{"id":2207,"date":"2026-01-17T09:11:26","date_gmt":"2026-01-17T09:11:26","guid":{"rendered":"https:\/\/nicktailor.com\/tech-blog\/?p=2207"},"modified":"2026-01-17T09:11:26","modified_gmt":"2026-01-17T09:11:26","slug":"slurm-accounting-setup-my-personal-notes","status":"publish","type":"post","link":"https:\/\/nicktailor.com\/tech-blog\/slurm-accounting-setup-my-personal-notes\/","title":{"rendered":"SLURM Accounting Setup; my personal notes"},"content":{"rendered":"\n<article>\n\n<p>\nSLURM accounting tracks every job that runs on your cluster \u2014 who submitted it, what resources\nit used, how long it ran, and which account to bill. This data powers fairshare scheduling,\nresource limits, usage reports, and chargeback billing.\n<\/p>\n\n<p>\nThis post walks through setting up SLURM accounting from scratch in a production environment,\nwith the database on a dedicated server separate from the controller.\n<\/p>\n\n<hr \/>\n\n<h2>Architecture Overview<\/h2>\n\n<p>\nIn production, you separate the database from the controller for performance and reliability:\n<\/p>\n\n<pre><code>Controller Node        Database Node          Compute Nodes\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500        \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500          \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nslurmctld              slurmdbd               slurmd\n                       MariaDB\/MySQL          slurmd\n                                              slurmd\n                                              ...\n<\/code><\/pre>\n\n<p><strong>How it works:<\/strong><\/p>\n\n<ul>\n    <li><code>slurmctld<\/code> (scheduler) sends job data to <code>slurmdbd<\/code><\/li>\n    <li><code>slurmdbd<\/code> (database daemon) writes to MariaDB\/MySQL<\/li>\n    <li>Compute nodes (<code>slurmd<\/code>) just run jobs \u2014 no database access<\/li>\n<\/ul>\n\n<p>\nThe controller 
never talks directly to the database. <code>slurmdbd<\/code> is the middleman\nthat handles connection pooling, batches writes, and queues data if the database is temporarily\nunavailable.\n<\/p>\n\n<hr \/>\n\n<h2>Prerequisites<\/h2>\n\n<p>Before starting, ensure you have:<\/p>\n\n<ul>\n    <li>Working SLURM cluster (slurmctld on controller, slurmd on compute nodes)<\/li>\n    <li>Dedicated database server (can be VM or physical)<\/li>\n    <li>Network connectivity between controller and database server<\/li>\n    <li>Consistent SLURM user\/group (UID\/GID must match across all nodes)<\/li>\n    <li>Munge authentication working across all nodes<\/li>\n<\/ul>\n\n<hr \/>\n\n<h2>Step 1: Install MariaDB on Database Server<\/h2>\n\n<p>On your dedicated database server:<\/p>\n\n<pre><code># Install MariaDB\nsudo apt update\nsudo apt install mariadb-server mariadb-client -y\n\n# Start and enable\nsudo systemctl start mariadb\nsudo systemctl enable mariadb\n\n# Secure installation\nsudo mysql_secure_installation\n<\/code><\/pre>\n\n<p>During secure installation:<\/p>\n<ul>\n    <li>Set root password<\/li>\n    <li>Remove anonymous users \u2014 Yes<\/li>\n    <li>Disallow root login remotely \u2014 Yes<\/li>\n    <li>Remove test database \u2014 Yes<\/li>\n    <li>Reload privilege tables \u2014 Yes<\/li>\n<\/ul>\n\n<hr \/>\n\n<h2>Step 2: Create SLURM Database and User<\/h2>\n\n<p>Log into MariaDB and create the database:<\/p>\n\n<pre><code>sudo mysql -u root -p\n<\/code><\/pre>\n\n<pre><code>-- Create database\nCREATE DATABASE slurm_acct_db;\n\n-- Create slurm user. In this guide slurmdbd runs on the database server\n-- itself, so it connects via localhost\nCREATE USER 'slurm'@'localhost' IDENTIFIED BY 'your_secure_password';\n\n-- Grant privileges\nGRANT ALL PRIVILEGES ON slurm_acct_db.* TO 'slurm'@'localhost';\n\n-- If slurmdbd runs on the controller instead (alternative setup)\n-- CREATE USER 'slurm'@'controller.example.com' IDENTIFIED BY 'your_secure_password';\n-- GRANT ALL PRIVILEGES ON slurm_acct_db.* TO 'slurm'@'controller.example.com';\n\nFLUSH PRIVILEGES;\nEXIT;\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 3: Configure MariaDB for Remote Access<\/h2>\n\n<p>This step is only needed if <code>slurmdbd<\/code> runs on a different host than MariaDB (e.g. the controller). When <code>slurmdbd<\/code> runs on the database server itself, as in this guide, MariaDB can keep listening on localhost only. To allow remote connections, edit the MariaDB configuration:<\/p>\n\n<pre><code>sudo nano \/etc\/mysql\/mariadb.conf.d\/50-server.cnf\n<\/code><\/pre>\n\n<p>Find and modify the bind-address:<\/p>\n\n<pre><code># Change from\nbind-address = 127.0.0.1\n\n# To (listen on all interfaces)\nbind-address = 0.0.0.0\n\n# Or specific IP\nbind-address = 192.168.1.10\n<\/code><\/pre>\n\n<p>Add performance settings for SLURM workload:<\/p>\n\n<pre><code>[mysqld]\nbind-address = 0.0.0.0\ninnodb_buffer_pool_size = 1G\ninnodb_log_file_size = 64M\ninnodb_lock_wait_timeout = 900\nmax_connections = 200\n<\/code><\/pre>\n\n<p>Restart MariaDB:<\/p>\n\n<pre><code>sudo systemctl restart mariadb\n<\/code><\/pre>\n\n<p>Open firewall if needed:<\/p>\n\n<pre><code># UFW\nsudo ufw allow from 192.168.1.0\/24 to any port 3306\n\n# Or firewalld\nsudo firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.1.0\/24\" port protocol=\"tcp\" port=\"3306\" accept'\nsudo firewall-cmd --reload\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 4: Install slurmdbd on Database Server<\/h2>\n\n<p>\nYou can run <code>slurmdbd<\/code> on the database server or the controller. 
Running it on the\ndatabase server keeps database traffic local.\n<\/p>\n\n<pre><code># On database server\nsudo apt install slurmdbd -y\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 5: Configure slurmdbd<\/h2>\n\n<p>Create the slurmdbd configuration file:<\/p>\n\n<pre><code>sudo nano \/etc\/slurm\/slurmdbd.conf\n<\/code><\/pre>\n\n<pre><code># slurmdbd.conf - SLURM Database Daemon Configuration\n\n# Daemon settings\nAuthType=auth\/munge\nDbdHost=dbserver.example.com\nDbdPort=6819\nSlurmUser=slurm\n\n# Logging\nLogFile=\/var\/log\/slurm\/slurmdbd.log\nPidFile=\/run\/slurmdbd.pid\nDebugLevel=info\n\n# Database connection\nStorageType=accounting_storage\/mysql\nStorageHost=localhost\nStoragePort=3306\nStorageUser=slurm\nStoragePass=your_secure_password\nStorageLoc=slurm_acct_db\n\n# Archive settings (optional)\n#ArchiveEvents=yes\n#ArchiveJobs=yes\n#ArchiveResvs=yes\n#ArchiveSteps=no\n#ArchiveSuspend=no\n#ArchiveTXN=no\n#ArchiveUsage=no\n#ArchiveScript=\/usr\/sbin\/slurm.dbd.archive\n\n# Purge old data (optional - keep 12 months)\n#PurgeEventAfter=12months\n#PurgeJobAfter=12months\n#PurgeResvAfter=12months\n#PurgeStepAfter=12months\n#PurgeSuspendAfter=12months\n#PurgeTXNAfter=12months\n#PurgeUsageAfter=12months\n<\/code><\/pre>\n\n<p>Set proper permissions:<\/p>\n\n<pre><code># slurmdbd.conf must be readable only by SlurmUser (contains password)\nsudo chown slurm:slurm \/etc\/slurm\/slurmdbd.conf\nsudo chmod 600 \/etc\/slurm\/slurmdbd.conf\n\n# Create log directory\nsudo mkdir -p \/var\/log\/slurm\nsudo chown slurm:slurm \/var\/log\/slurm\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 6: Start slurmdbd<\/h2>\n\n<p>Start the daemon and verify it connects to the database:<\/p>\n\n<pre><code># Start slurmdbd\nsudo systemctl start slurmdbd\nsudo systemctl enable slurmdbd\n\n# Check status\nsudo systemctl status slurmdbd\n\n# Check logs for errors\nsudo tail -f \/var\/log\/slurm\/slurmdbd.log\n<\/code><\/pre>\n\n<p>Successful startup looks like:<\/p>\n\n<pre><code>slurmdbd: debug:  slurmdbd version 23.02.4 
started\nslurmdbd: debug:  Listening on 0.0.0.0:6819\nslurmdbd: info:   Registering cluster(s) with database\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 7: Configure slurmctld to Use Accounting<\/h2>\n\n<p>On your controller node, edit slurm.conf:<\/p>\n\n<pre><code>sudo nano \/etc\/slurm\/slurm.conf\n<\/code><\/pre>\n\n<p>Add accounting configuration:<\/p>\n\n<pre><code># Accounting settings\nAccountingStorageType=accounting_storage\/slurmdbd\nAccountingStorageHost=dbserver.example.com\nAccountingStoragePort=6819\nAccountingStorageEnforce=associations,limits,qos,safe\n\n# Job completion logging\nJobCompType=jobcomp\/none\nJobAcctGatherType=jobacct_gather\/linux\nJobAcctGatherFrequency=30\n\n# Process tracking (required for accurate accounting)\nProctrackType=proctrack\/cgroup\nTaskPlugin=task\/cgroup,task\/affinity\n<\/code><\/pre>\n\n<p><strong>AccountingStorageEnforce options:<\/strong><\/p>\n\n<ul>\n    <li><strong>associations<\/strong> \u2014 Users must have valid account association to submit jobs<\/li>\n    <li><strong>limits<\/strong> \u2014 Enforce resource limits set on accounts\/users<\/li>\n    <li><strong>qos<\/strong> \u2014 Enforce Quality of Service settings<\/li>\n    <li><strong>safe<\/strong> \u2014 Only allow jobs that can run within limits<\/li>\n<\/ul>\n\n<hr \/>\n\n<h2>Step 8: Open Firewall for slurmdbd<\/h2>\n\n<p>On the database server, allow connections from the controller:<\/p>\n\n<pre><code># UFW\nsudo ufw allow from 192.168.1.0\/24 to any port 6819\n\n# Or firewalld\nsudo firewall-cmd --permanent --add-port=6819\/tcp\nsudo firewall-cmd --reload\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 9: Restart slurmctld<\/h2>\n\n<p>On the controller:<\/p>\n\n<pre><code>sudo systemctl restart slurmctld\n\n# Check it connected to slurmdbd\nsudo tail -f \/var\/log\/slurm\/slurmctld.log\n<\/code><\/pre>\n\n<p>Look for:<\/p>\n\n<pre><code>slurmctld: accounting_storage\/slurmdbd: init: AccountingStorageHost=dbserver.example.com:6819\nslurmctld: 
accounting_storage\/slurmdbd: init: Database connection established\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 10: Create Cluster in Database<\/h2>\n\n<p>Register your cluster with the accounting database. The name must match <code>ClusterName<\/code> in slurm.conf:<\/p>\n\n<pre><code>sudo sacctmgr add cluster mycluster\n<\/code><\/pre>\n\n<p>Verify:<\/p>\n\n<pre><code>sacctmgr show cluster\n   Cluster     ControlHost  ControlPort   RPC     Share GrpJobs       GrpTRES GrpSubmit MaxJobs       MaxTRES MaxSubmit     MaxWall                  QOS   Def QOS\n---------- --------------- ------------ ----- --------- ------- ------------- --------- ------- ------------- --------- ----------- -------------------- ---------\n mycluster  controller.ex.         6817  9728         1                                                                                           normal\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 11: Create Accounts<\/h2>\n\n<p>Create your account hierarchy:<\/p>\n\n<pre><code># Create parent account (organisation)\nsudo sacctmgr add account science Description=\"Science Division\" Organization=\"MyOrg\"\n\n# Create department accounts under science\nsudo sacctmgr add account physics Description=\"Physics Department\" Organization=\"MyOrg\" Parent=science\nsudo sacctmgr add account chemistry Description=\"Chemistry Department\" Organization=\"MyOrg\" Parent=science\nsudo sacctmgr add account biology Description=\"Biology Department\" Organization=\"MyOrg\" Parent=science\n\n# Create standalone accounts\nsudo sacctmgr add account ai Description=\"AI Research\" Organization=\"MyOrg\"\nsudo sacctmgr add account engineering Description=\"Engineering\" Organization=\"MyOrg\"\n<\/code><\/pre>\n\n<p>View account hierarchy:<\/p>\n\n<pre><code>sacctmgr show account -s\n   Account                Descr                  Org\n---------- -------------------- --------------------\n   science       Science Division                MyOrg\n    physics    Physics Department                MyOrg\n  chemistry  Chemistry Department              
  MyOrg\n    biology    Biology Department                MyOrg\n        ai          AI Research                MyOrg\nengineering          Engineering                MyOrg\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 12: Add Users to Accounts<\/h2>\n\n<pre><code># Add users to accounts\nsudo sacctmgr add user jsmith Account=physics\nsudo sacctmgr add user kwilson Account=ai\nsudo sacctmgr add user pjones Account=chemistry\n\n# User can belong to multiple accounts\nsudo sacctmgr add user jsmith Account=ai\n\n# Set default account for user\nsudo sacctmgr modify user jsmith set DefaultAccount=physics\n<\/code><\/pre>\n\n<p>View user associations:<\/p>\n\n<pre><code>sacctmgr show assoc format=Cluster,Account,User,Partition,Share,MaxJobs,MaxCPUs\n   Cluster    Account       User  Partition     Share  MaxJobs  MaxCPUs\n---------- ---------- ---------- ---------- --------- -------- --------\n mycluster    physics     jsmith                    1\n mycluster         ai     jsmith                    1\n mycluster         ai    kwilson                    1\n mycluster  chemistry     pjones                    1\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 13: Set Resource Limits<\/h2>\n\n<p>Apply limits at account or user level:<\/p>\n\n<pre><code># Limit physics account to 500 CPUs max, 50 concurrent jobs\nsudo sacctmgr modify account physics set MaxCPUs=500 MaxJobs=50\n\n# Limit specific user\nsudo sacctmgr modify user jsmith set MaxCPUs=100 MaxJobs=10\n\n# Limit by partition\nsudo sacctmgr modify user jsmith where partition=gpu set MaxCPUs=32 MaxJobs=2\n<\/code><\/pre>\n\n<p>View limits:<\/p>\n\n<pre><code>sacctmgr show assoc format=Cluster,Account,User,Partition,MaxJobs,MaxCPUs,MaxNodes\n   Cluster    Account       User  Partition  MaxJobs  MaxCPUs MaxNodes\n---------- ---------- ---------- ---------- -------- -------- --------\n mycluster    physics                              50      500\n mycluster    physics     jsmith                   10      100\n mycluster    physics     
jsmith        gpu         2       32\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 14: Configure Fairshare<\/h2>\n\n<p>Fairshare adjusts job priority based on historical usage. Heavy users get lower priority.<\/p>\n\n<pre><code># Set shares (relative weight) for accounts\nsudo sacctmgr modify account physics set Fairshare=100\nsudo sacctmgr modify account chemistry set Fairshare=100\nsudo sacctmgr modify account ai set Fairshare=200  # AI gets double weight\n<\/code><\/pre>\n\n<p>Enable fairshare in slurm.conf on the controller:<\/p>\n\n<pre><code># Priority settings\nPriorityType=priority\/multifactor\nPriorityWeightFairshare=10000\nPriorityWeightAge=1000\nPriorityWeightPartition=1000\nPriorityWeightJobSize=500\nPriorityDecayHalfLife=7-0\nPriorityUsageResetPeriod=MONTHLY\n<\/code><\/pre>\n\n<p>Restart slurmctld after changes:<\/p>\n\n<pre><code>sudo systemctl restart slurmctld\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Step 15: Verify Everything Works<\/h2>\n\n<h3>Test job submission with accounting:<\/h3>\n\n<pre><code># Submit job with account\nsbatch --account=physics --job-name=test --wrap=\"sleep 60\"\n\n# Check it's tracked\nsqueue\nsacct -j JOBID\n<\/code><\/pre>\n\n<h3>Check database connectivity:<\/h3>\n\n<pre><code># From controller\nsacctmgr show cluster\nsacctmgr show account\nsacctmgr show assoc\n<\/code><\/pre>\n\n<h3>Verify accounting is enforced:<\/h3>\n\n<pre><code># Try submitting without valid account (should fail if enforce=associations)\nsbatch --account=nonexistent --wrap=\"hostname\"\n# Expected: error: Batch job submission failed: Invalid account or account\/partition combination specified\n<\/code><\/pre>\n\n<h3>Check usage reports:<\/h3>\n\n<pre><code>sreport cluster utilization\nsreport user top start=2026-01-01\nsreport cluster AccountUtilizationByUser start=2026-01-01\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>Useful sacctmgr Commands<\/h2>\n\n<table>\n    <thead>\n        <tr>\n            <th>Command<\/th>\n            <th>Purpose<\/th>\n        <\/tr>\n    <\/thead>\n    <tbody>\n        <tr>\n            
<td><code>sacctmgr show cluster<\/code><\/td>\n            <td>List registered clusters<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr show account<\/code><\/td>\n            <td>List all accounts<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr show account -s<\/code><\/td>\n            <td>Show account hierarchy<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr show user<\/code><\/td>\n            <td>List all users<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr show assoc<\/code><\/td>\n            <td>Show all associations (user-account mappings)<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr add account NAME<\/code><\/td>\n            <td>Create new account<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr add user NAME Account=X<\/code><\/td>\n            <td>Add user to account<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr modify account X set MaxCPUs=Y<\/code><\/td>\n            <td>Set account limits<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr modify user X set MaxJobs=Y<\/code><\/td>\n            <td>Set user limits<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr delete user NAME Account=X<\/code><\/td>\n            <td>Remove user from account<\/td>\n        <\/tr>\n        <tr>\n            <td><code>sacctmgr delete account NAME<\/code><\/td>\n            <td>Delete account<\/td>\n        <\/tr>\n    <\/tbody>\n<\/table>\n\n<hr \/>\n\n<h2>Troubleshooting<\/h2>\n\n<h3>slurmdbd won&#8217;t start<\/h3>\n\n<pre><code># Check logs\nsudo tail -100 \/var\/log\/slurm\/slurmdbd.log\n\n# Common issues:\n# - Wrong database credentials in slurmdbd.conf\n# - MySQL not running\n# - Permissions on slurmdbd.conf (must be 600, owned by slurm)\n# - Munge not running\n<\/code><\/pre>\n\n<h3>slurmctld can&#8217;t connect to slurmdbd<\/h3>\n\n<pre><code># Test connectivity\ntelnet 
dbserver.example.com 6819\n\n# Check firewall\nsudo ufw status\nsudo firewall-cmd --list-all\n\n# Verify slurmdbd is listening\nss -tlnp | grep 6819\n<\/code><\/pre>\n\n<h3>Jobs not being tracked<\/h3>\n\n<pre><code># Verify accounting is enabled\nscontrol show config | grep AccountingStorage\n\n# Should show:\n# AccountingStorageType = accounting_storage\/slurmdbd\n\n# Check association exists for user\nsacctmgr show assoc user=jsmith\n<\/code><\/pre>\n\n<h3>Database connection errors<\/h3>\n\n<pre><code># Test MySQL connection from slurmdbd host\nmysql -h localhost -u slurm -p slurm_acct_db\n\n# Check MySQL is accepting connections\nsudo systemctl status mariadb\nsudo tail -100 \/var\/log\/mysql\/error.log\n<\/code><\/pre>\n\n<hr \/>\n\n<h2>My Thoughts<\/h2>\n\n<p>\nSetting up SLURM accounting properly from the start saves headaches later. Once it&#8217;s running,\nyou get automatic tracking of every job, fair scheduling between groups, and the data you\nneed for billing and capacity planning.\n<\/p>\n\n<p>\nKey points to remember:\n<\/p>\n\n<ul>\n    <li>Keep the database separate from the controller in production<\/li>\n    <li><code>slurmdbd<\/code> is the middleman \u2014 controller never hits the database directly<\/li>\n    <li>Compute nodes don&#8217;t need database access; they just run jobs<\/li>\n    <li>Set up your account hierarchy before adding users<\/li>\n    <li>Use <code>AccountingStorageEnforce<\/code> to make accounting mandatory<\/li>\n    <li>Fairshare prevents any single group from hogging the cluster<\/li>\n<\/ul>\n\n<p>\nThe database is your audit trail. It tracks everything, so when someone asks &#8220;why is my job\nslow&#8221; or &#8220;how much did we use last month&#8221;, you have the answers.\n<\/p>\n\n<\/article>\n","protected":false},"excerpt":{"rendered":"<p>SLURM accounting tracks every job that runs on your cluster \u2014 who submitted it, what resources it used, how long it ran, and which account to bill. 
This data powers fairshare scheduling, resource limits, usage reports, and chargeback billing. This post walks through setting up SLURM accounting from scratch in a production environment, with the database on a dedicated server<a href=\"https:\/\/nicktailor.com\/tech-blog\/slurm-accounting-setup-my-personal-notes\/\" class=\"read-more\">Read More &#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[143],"tags":[],"class_list":["post-2207","post","type-post","status-publish","format-standard","hentry","category-hpc"],"_links":{"self":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts\/2207","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/comments?post=2207"}],"version-history":[{"count":1,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts\/2207\/revisions"}],"predecessor-version":[{"id":2208,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts\/2207\/revisions\/2208"}],"wp:attachment":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/media?parent=2207"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/categories?post=2207"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/tags?post=2207"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}