{"id":2221,"date":"2025-03-28T06:24:34","date_gmt":"2025-03-28T06:24:34","guid":{"rendered":"https:\/\/nicktailor.com\/tech-blog\/?p=2221"},"modified":"2026-01-28T06:26:02","modified_gmt":"2026-01-28T06:26:02","slug":"openshift-architecture-migration-design-building-secure-scalable-enterprise-platforms","status":"publish","type":"post","link":"https:\/\/nicktailor.com\/tech-blog\/openshift-architecture-migration-design-building-secure-scalable-enterprise-platforms\/","title":{"rendered":"OpenShift Architecture &amp; Migration Design: Building Secure, Scalable Enterprise Platforms"},"content":{"rendered":"\n<p>Designing and migrating to OpenShift is not about installing a cluster. It is about controlling failure domains, aligning schedulers, and avoiding hidden infrastructure bottlenecks that only surface under load or during outages.<\/p>\n\n\n\n<p>This post walks through concrete implementation patterns using Terraform and Ansible, explains why they exist, and highlights what will break if you get them wrong.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Migration Strategy: Phased Approach<\/h2>\n\n\n\n<p>Every failed migration I have seen skipped or compressed one of these phases. 
The pressure to &#8220;just move it&#8221; creates technical debt that surfaces as production incidents.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Phase 1: Discovery and Assessment<\/h3>\n\n\n\n<p>Before touching infrastructure, you need a complete inventory of what exists and how it behaves.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# VMware dependency discovery script\n# Export VM metadata, network connections, storage mappings\n\n$vms = Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB,\n  @{N='Datastore';E={(Get-Datastore -VM $_).Name}},\n  @{N='Network';E={(Get-NetworkAdapter -VM $_).NetworkName}},\n  @{N='VMHost';E={$_.VMHost.Name}}\n\n$vms | Export-Csv -Path \"vm-inventory.csv\" -NoTypeInformation\n\n# Capture network flows for dependency mapping\n# Run for minimum 2 weeks to capture batch jobs and monthly processes\n<\/code><\/pre>\n\n\n\n<p>What you are looking for:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Hard-coded IPs in application configs<\/li>\n\n\n\n<li>NFS mounts and shared storage dependencies<\/li>\n\n\n\n<li>Inter-VM communication patterns (what talks to what)<\/li>\n\n\n\n<li>Authentication integrations (LDAP, AD, service accounts)<\/li>\n\n\n\n<li>Scheduled jobs and their timing dependencies<\/li>\n<\/ul>\n\n\n\n<p><strong>Assessment deliverables:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Application dependency map<\/li>\n\n\n\n<li>Containerization readiness score per workload<\/li>\n\n\n\n<li>Risk register with mitigation strategies<\/li>\n\n\n\n<li>Estimated effort per application (T-shirt sizing)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Phase 2: Target Architecture Design<\/h3>\n\n\n\n<p>Design the OpenShift environment before migration begins. 
This includes cluster topology, namespace strategy, and resource quotas.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Namespace strategy example\n# Environments separated by namespace, not cluster\n\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: app-prod\n  labels:\n    environment: production\n    cost-center: \"12345\"\n    data-classification: confidential\n---\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: app-prod-quota\n  namespace: app-prod\nspec:\n  hard:\n    requests.cpu: \"40\"\n    requests.memory: 80Gi\n    limits.cpu: \"80\"\n    limits.memory: 160Gi\n    persistentvolumeclaims: \"20\"\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Phase 3: Pilot Migration<\/h3>\n\n\n\n<p>Select two to three non-critical applications that exercise different patterns:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>One stateless web application<\/li>\n\n\n\n<li>One application with persistent storage<\/li>\n\n\n\n<li>One application with external integrations<\/li>\n<\/ul>\n\n\n\n<p>The pilot validates your tooling, processes, and assumptions before you scale.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Phase 4: Wave Migration<\/h3>\n\n\n\n<p>Group applications into waves based on dependencies and risk. 
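<\/p>\n\n\n\n<p>As a sketch, the wave grouping can be derived mechanically from the dependency map with a topological sort, so no application migrates before the services it depends on (the application names and the <code>depends_on<\/code> structure here are illustrative, not from a real inventory):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Sketch: derive migration waves from a dependency map.\n# App names and the depends_on structure are illustrative.\ndef plan_waves(depends_on):\n    \"\"\"Group apps into waves; each wave depends only on earlier waves.\"\"\"\n    apps = set(depends_on)\n    for deps in depends_on.values():\n        apps.update(deps)\n    remaining, waves = set(apps), &#91;]\n    while remaining:\n        # Ready: every dependency already placed in an earlier wave\n        wave = sorted(a for a in remaining\n                      if all(d not in remaining for d in depends_on.get(a, &#91;])))\n        if not wave:\n            raise ValueError(\"circular dependency detected\")\n        waves.append(wave)\n        remaining -= set(wave)\n    return waves\n\ndeps = {\n    \"static-website\": &#91;],\n    \"payments-db\": &#91;],\n    \"api-gateway\": &#91;\"static-website\"],\n    \"checkout\": &#91;\"api-gateway\", \"payments-db\"],\n}\nprint(plan_waves(deps))\n# -&gt; &#91;&#91;'payments-db', 'static-website'], &#91;'api-gateway'], &#91;'checkout']]\n<\/code><\/pre>\n\n\n\n<p>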
Each wave should be independently deployable and rollback-capable.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Wave planning structure\nwave_1:\n  applications:\n    - name: static-website\n      risk: low\n      dependencies: none\n      estimated_downtime: 0\n  success_criteria:\n    - all pods healthy for 24 hours\n    - response times within 10% of baseline\n    - zero error rate increase\n\nwave_2:\n  applications:\n    - name: api-gateway\n      risk: medium\n      dependencies:\n        - static-website\n      estimated_downtime: 5 minutes\n  gate:\n    - wave_1 success criteria met\n    - stakeholder sign-off\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Phase 5: Cutover and Decommission<\/h3>\n\n\n\n<p>Final traffic switch and legacy teardown. This is where DNS TTL planning matters.<\/p>\n\n\n\n<p><strong>Common cutover failures:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>DNS TTLs not reduced in advance (reduce to 60 seconds, 48 hours before cutover)<\/li>\n\n\n\n<li>Client-side caching ignoring TTL<\/li>\n\n\n\n<li>Hardcoded IPs in partner systems<\/li>\n\n\n\n<li>Certificate mismatches after DNS change<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">VMware to OpenShift: Migration Patterns<\/h2>\n\n\n\n<p>Not every VM becomes a container. The migration pattern depends on application architecture, not convenience.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Pattern 1: Lift and Containerize<\/h3>\n\n\n\n<p>For applications that are already 12-factor compliant or close to it. 
Package existing binaries into containers with minimal modification.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Dockerfile for legacy Java application\nFROM registry.access.redhat.com\/ubi8\/openjdk-11-runtime\n\nCOPY target\/app.jar \/deployments\/app.jar\n\nENV JAVA_OPTS=\"-Xms512m -Xmx2048m\"\n\nEXPOSE 8080\nCMD &#91;\"java\", \"-jar\", \"\/deployments\/app.jar\"]\n<\/code><\/pre>\n\n\n\n<p><strong>When to use:<\/strong> Application reads config from environment variables, logs to stdout, and has no local state.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Pattern 2: Replatform with Refactoring<\/h3>\n\n\n\n<p>Application requires changes to run in containers but core logic remains. Typical changes include externalizing configuration and adding health endpoints.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Spring Boot health endpoint addition\nmanagement:\n  endpoints:\n    web:\n      exposure:\n        include: health,info,prometheus\n  endpoint:\n    health:\n      probes:\n        enabled: true\n      show-details: always\n<\/code><\/pre>\n\n\n\n<p><strong>When to use:<\/strong> Application has some container-unfriendly patterns (file-based config, local logging) but is otherwise sound.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Pattern 3: Retain on VM<\/h3>\n\n\n\n<p>Some workloads should not be containerized:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Legacy applications with kernel dependencies<\/li>\n\n\n\n<li>Workloads requiring specific hardware (GPU passthrough, SR-IOV)<\/li>\n\n\n\n<li>Applications with licensing tied to VM or physical host<\/li>\n\n\n\n<li>Databases with extreme I\/O requirements (evaluate case by case)<\/li>\n<\/ul>\n\n\n\n<p>OpenShift Virtualization (KubeVirt) can run VMs alongside containers when needed.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\napiVersion: kubevirt.io\/v1\nkind: VirtualMachine\nmetadata:\n  name: legacy-app-vm\nspec:\n  running: true\n  template:\n    spec:\n      domain:\n        cpu:\n    
      cores: 4\n        memory:\n          guest: 8Gi\n        devices:\n          disks:\n            - name: rootdisk\n              disk:\n                bus: virtio\n      volumes:\n        - name: rootdisk\n          persistentVolumeClaim:\n            claimName: legacy-app-pvc\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Pattern 4: Rebuild or Replace<\/h3>\n\n\n\n<p>Application is fundamentally incompatible and would require complete rewrite. Evaluate whether a commercial off-the-shelf replacement makes more sense.<\/p>\n\n\n\n<p><strong>Decision matrix:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><th>Factor<\/th><th>Containerize<\/th><th>Keep on VM<\/th><th>Replace<\/th><\/tr><tr><td>Strategic value<\/td><td>High<\/td><td>Low\/Legacy<\/td><td>Medium<\/td><\/tr><tr><td>Maintenance cost<\/td><td>Acceptable<\/td><td>High but stable<\/td><td>Unsustainable<\/td><\/tr><tr><td>12-factor compliance<\/td><td>Partial or full<\/td><td>None<\/td><td>N\/A<\/td><\/tr><tr><td>Vendor support<\/td><td>Available<\/td><td>Legacy only<\/td><td>N\/A<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Infrastructure Provisioning with Terraform<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Why Terraform First<\/h3>\n\n\n\n<p>OpenShift installation assumes the underlying infrastructure is deterministic. If VM placement, CPU topology, or networking varies between environments, the cluster will behave differently under identical workloads. 
Terraform is used to lock infrastructure intent before OpenShift ever runs.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Example: vSphere Control Plane and Worker Provisioning<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\nprovider \"vsphere\" {\n  user           = var.vsphere_user\n  password       = var.vsphere_password\n  vsphere_server = var.vsphere_server\n  allow_unverified_ssl = true\n}\n\ndata \"vsphere_datacenter\" \"dc\" {\n  name = var.datacenter\n}\n\ndata \"vsphere_compute_cluster\" \"cluster\" {\n  name          = var.cluster\n  datacenter_id = data.vsphere_datacenter.dc.id\n}\n\ndata \"vsphere_datastore\" \"datastore\" {\n  name          = var.datastore\n  datacenter_id = data.vsphere_datacenter.dc.id\n}\n\ndata \"vsphere_network\" \"network\" {\n  name          = var.network\n  datacenter_id = data.vsphere_datacenter.dc.id\n}\n\nresource \"vsphere_virtual_machine\" \"control_plane\" {\n  count            = 3\n  name             = \"ocp-master-${count.index}\"\n  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id\n  datastore_id     = data.vsphere_datastore.datastore.id\n  folder           = var.vm_folder\n\n  num_cpus = 8\n  memory   = 32768\n  guest_id = \"rhel8_64Guest\"\n\n  # Critical: Reservations prevent resource contention\n  cpu_reservation    = 8000\n  memory_reservation = 32768\n\n  # Anti-affinity rule reference\n  depends_on = &#91;vsphere_compute_cluster_vm_anti_affinity_rule.control_plane_anti_affinity]\n\n  network_interface {\n    network_id = data.vsphere_network.network.id\n  }\n\n  disk {\n    label            = \"root\"\n    size             = 120\n    thin_provisioned = false  # Thick provisioning for control plane\n  }\n\n  disk {\n    label            = \"etcd\"\n    size             = 100\n    thin_provisioned = false\n    unit_number      = 1\n  }\n}\n\n# Anti-affinity ensures control plane nodes run on different hosts\nresource 
\"vsphere_compute_cluster_vm_anti_affinity_rule\" \"control_plane_anti_affinity\" {\n  name               = \"ocp-control-plane-anti-affinity\"\n  compute_cluster_id = data.vsphere_compute_cluster.cluster.id\n  virtual_machine_ids = &#91;for vm in vsphere_virtual_machine.control_plane : vm.id]\n}\n<\/code><\/pre>\n\n\n\n<p>CPU and memory reservations are not optional. Without them, vSphere ballooning and scheduling delays will surface as random etcd latency and API instability.<\/p>\n\n\n\n<p><strong>What usually breaks:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>etcd timeouts under load (etcd requires consistent sub-10ms disk latency)<\/li>\n\n\n\n<li>API server flapping during node pressure<\/li>\n\n\n\n<li>Unexplained cluster degradation after vMotion events<\/li>\n\n\n\n<li>Split-brain scenarios when anti-affinity is not enforced<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Worker Node Pools by Workload Type<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\nresource \"vsphere_virtual_machine\" \"workers_general\" {\n  count = 6\n  name  = \"ocp-worker-general-${count.index}\"\n\n  num_cpus = 8\n  memory   = 32768\n\n  # General workers can use thin provisioning\n  disk {\n    label            = \"root\"\n    size             = 120\n    thin_provisioned = true\n  }\n}\n\nresource \"vsphere_virtual_machine\" \"workers_stateful\" {\n  count = 3\n  name  = \"ocp-worker-stateful-${count.index}\"\n\n  num_cpus = 16\n  memory   = 65536\n\n  # Stateful workers need guaranteed resources\n  cpu_reservation    = 16000\n  memory_reservation = 65536\n\n  disk {\n    label            = \"root\"\n    size             = 120\n    thin_provisioned = false\n  }\n}\n\nresource \"vsphere_virtual_machine\" \"workers_infra\" {\n  count = 3\n  name  = \"ocp-worker-infra-${count.index}\"\n\n  num_cpus = 8\n  memory   = 32768\n\n  # Infrastructure nodes for routers, monitoring, logging\n  disk {\n   
 label            = \"root\"\n    size             = 200\n    thin_provisioned = false\n  }\n}\n<\/code><\/pre>\n\n\n\n<p>Different workloads require different failure and performance characteristics. Trying to &#8220;let Kubernetes figure it out&#8221; leads to noisy neighbors and unpredictable latency.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Post-Provision Configuration with Ansible<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Why Ansible Is Still Required<\/h3>\n\n\n\n<p>Terraform stops at infrastructure. OpenShift nodes require OS-level hardening, kernel tuning, and configuration consistency before installation. Ignoring this step leads to subtle instability that manifests weeks later.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Example: Node OS Hardening<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n---\n- name: Prepare OpenShift nodes\n  hosts: openshift_nodes\n  become: true\n  tasks:\n\n    - name: Disable swap\n      command: swapoff -a\n      changed_when: false\n\n    - name: Remove swap from fstab\n      replace:\n        path: \/etc\/fstab\n        regexp: '^(&#91;^#].*swap.*)$'\n        replace: '# \\1'\n\n    - name: Set kernel parameters for OpenShift\n      sysctl:\n        name: \"{{ item.key }}\"\n        value: \"{{ item.value }}\"\n        state: present\n        sysctl_file: \/etc\/sysctl.d\/99-openshift.conf\n      loop:\n        - { key: net.ipv4.ip_forward, value: 1 }\n        - { key: net.bridge.bridge-nf-call-iptables, value: 1 }\n        - { key: net.bridge.bridge-nf-call-ip6tables, value: 1 }\n        - { key: vm.max_map_count, value: 262144 }\n        - { key: fs.inotify.max_user_watches, value: 1048576 }\n        - { key: fs.inotify.max_user_instances, value: 8192 }\n        - { key: net.core.somaxconn, value: 32768 }\n        - { key: net.ipv4.tcp_max_syn_backlog, value: 32768 }\n\n    - name: 
Load required kernel modules\n      modprobe:\n        name: \"{{ item }}\"\n        state: present\n      loop:\n        - br_netfilter\n        - overlay\n        - ip_vs\n        - ip_vs_rr\n        - ip_vs_wrr\n        - ip_vs_sh\n\n    - name: Ensure kernel modules load on boot\n      copy:\n        dest: \/etc\/modules-load.d\/openshift.conf\n        content: |\n          br_netfilter\n          overlay\n          ip_vs\n          ip_vs_rr\n          ip_vs_wrr\n          ip_vs_sh\n<\/code><\/pre>\n\n\n\n<p>These values are not arbitrary. OpenShift components and container runtimes will fail silently or degrade under load if kernel defaults are used.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Container Runtime Configuration<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n- name: Configure CRI-O\n  copy:\n    dest: \/etc\/crio\/crio.conf.d\/99-custom.conf\n    content: |\n      &#91;crio.runtime]\n      default_ulimits = &#91; \"nofile=1048576:1048576\" ]\n      pids_limit = 4096\n\n      &#91;crio.image]\n      pause_image = \"registry.redhat.io\/openshift4\/ose-pod:latest\"\n\n- name: Configure container storage\n  copy:\n    dest: \/etc\/containers\/storage.conf\n    content: |\n      &#91;storage]\n      driver = \"overlay\"\n      runroot = \"\/run\/containers\/storage\"\n      graphroot = \"\/var\/lib\/containers\/storage\"\n\n      &#91;storage.options.overlay]\n      mountopt = \"nodev,metacopy=on\"\n<\/code><\/pre>\n\n\n\n<p>Default ulimits are insufficient for high-density clusters. You will hit file descriptor exhaustion before CPU or memory limits.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Security Architecture<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">RBAC Design Principles<\/h3>\n\n\n\n<p>Role-based access control should follow least privilege. 
Avoid cluster-admin grants; use namespace-scoped roles.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Developer role - namespace scoped\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  name: developer\n  namespace: app-dev\nrules:\n  - apiGroups: &#91;\"\", \"apps\", \"batch\"]\n    resources: &#91;\"pods\", \"deployments\", \"services\", \"configmaps\", \"secrets\", \"jobs\", \"cronjobs\"]\n    verbs: &#91;\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n  - apiGroups: &#91;\"\"]\n    resources: &#91;\"pods\/log\", \"pods\/exec\"]\n    verbs: &#91;\"get\", \"create\"]\n  # RBAC is additive and has no deny rules: nodes and persistentvolumes\n  # remain inaccessible simply because no rule grants them\n---\n# Operations role - read-only cluster-wide; write access is granted\n# per namespace through separate RoleBindings\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: operations-readonly\nrules:\n  - apiGroups: &#91;\"\"]\n    resources: &#91;\"nodes\", \"namespaces\", \"persistentvolumes\"]\n    verbs: &#91;\"get\", \"list\", \"watch\"]\n  - apiGroups: &#91;\"\"]\n    resources: &#91;\"pods\", \"services\", \"endpoints\"]\n    verbs: &#91;\"get\", \"list\", \"watch\"]\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Network Policies<\/h3>\n\n\n\n<p>Default deny with explicit allows. 
Every namespace should have a baseline policy.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Default deny all ingress\napiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: default-deny-ingress\n  namespace: app-prod\nspec:\n  podSelector: {}\n  policyTypes:\n    - Ingress\n---\n# Allow ingress from same namespace only\napiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: allow-same-namespace\n  namespace: app-prod\nspec:\n  podSelector: {}\n  policyTypes:\n    - Ingress\n  ingress:\n    - from:\n        - podSelector: {}\n---\n# Allow ingress from OpenShift router\napiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: allow-from-router\n  namespace: app-prod\nspec:\n  podSelector:\n    matchLabels:\n      app: web-frontend\n  policyTypes:\n    - Ingress\n  ingress:\n    - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io\/policy-group: ingress\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Image Security and Supply Chain<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Image policy to restrict registries\napiVersion: config.openshift.io\/v1\nkind: Image\nmetadata:\n  name: cluster\nspec:\n  registrySources:\n    allowedRegistries:\n      - registry.redhat.io\n      - registry.access.redhat.com\n      - quay.io\n      - ghcr.io\n      - registry.internal.example.com\n    blockedRegistries:\n      - docker.io  # Block Docker Hub for compliance\n---\n# Require signed images in production\napiVersion: policy.sigstore.dev\/v1alpha1\nkind: ClusterImagePolicy\nmetadata:\n  name: require-signatures\nspec:\n  images:\n    - glob: \"registry.internal.example.com\/prod\/**\"\n  authorities:\n    - keyless:\n        url: https:\/\/fulcio.sigstore.dev\n        identities:\n          - issuer: https:\/\/accounts.google.com\n            subject: release-team@example.com\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Pod Security 
Standards<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Enforce restricted security context\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: app-prod\n  labels:\n    pod-security.kubernetes.io\/enforce: restricted\n    pod-security.kubernetes.io\/audit: restricted\n    pod-security.kubernetes.io\/warn: restricted\n---\n# Security Context Constraints for OpenShift\napiVersion: security.openshift.io\/v1\nkind: SecurityContextConstraints\nmetadata:\n  name: app-restricted\nallowPrivilegedContainer: false\nallowPrivilegeEscalation: false\nrequiredDropCapabilities:\n  - ALL\nrunAsUser:\n  type: MustRunAsNonRoot\nseLinuxContext:\n  type: MustRunAs\nfsGroup:\n  type: MustRunAs\nvolumes:\n  - configMap\n  - emptyDir\n  - projected\n  - secret\n  - downwardAPI\n  - persistentVolumeClaim\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Networking: Where Most Migrations Fail<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Ingress and Load Balancer Alignment<\/h3>\n\n\n\n<p>External load balancers must align with OpenShift router expectations. Health checks should target readiness endpoints, not TCP ports.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# HAProxy configuration for OpenShift routers\nfrontend openshift_router_https\n    bind *:443\n    mode tcp\n    option tcplog\n    default_backend openshift_router_https_backend\n\nbackend openshift_router_https_backend\n    mode tcp\n    balance source\n    option httpchk GET \/healthz\/ready HTTP\/1.1\\r\\nHost:\\ router-health\n    http-check expect status 200\n    server router-0 192.168.1.10:443 check port 1936 inter 5s fall 3 rise 2\n    server router-1 192.168.1.11:443 check port 1936 inter 5s fall 3 rise 2\n    server router-2 192.168.1.12:443 check port 1936 inter 5s fall 3 rise 2\n<\/code><\/pre>\n\n\n\n<p><strong>Common failure:<\/strong> Load balancer marks routers healthy while the application is unavailable. 
TCP health checks pass even when the router pod is terminating.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">MTU and Overlay Networking<\/h3>\n\n\n\n<p>MTU mismatches between underlay, NSX, and OpenShift overlays cause:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Intermittent pod-to-pod packet loss<\/li>\n\n\n\n<li>gRPC failures (large payloads fragment incorrectly)<\/li>\n\n\n\n<li>Random CI\/CD pipeline timeouts<\/li>\n\n\n\n<li>TLS handshake failures<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Verify MTU across the path\n# Physical network: 9000 (jumbo frames)\n# NSX overlay: 8900 (100 byte overhead)\n# OpenShift OVN: 8800 (additional 100 byte overhead)\n\n# Test from inside a pod\nkubectl exec -it debug-pod -- ping -M do -s 8772 target-service\n\n# If this fails, reduce MTU until it works\n# Then configure cluster network appropriately\n<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# OpenShift cluster network configuration\napiVersion: operator.openshift.io\/v1\nkind: Network\nmetadata:\n  name: cluster\nspec:\n  clusterNetwork:\n    - cidr: 10.128.0.0\/14\n      hostPrefix: 23\n  serviceNetwork:\n    - 172.30.0.0\/16\n  defaultNetwork:\n    type: OVNKubernetes\n    ovnKubernetesConfig:\n      mtu: 8800\n      genevePort: 6081\n<\/code><\/pre>\n\n\n\n<p>This is almost never diagnosed correctly on first pass. Symptoms look like application bugs.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">DNS Configuration for Migration<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# CoreDNS custom configuration for migration\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: dns-custom\n  namespace: openshift-dns\ndata:\n  legacy.server: |\n    legacy.example.com:53 {\n      forward . 
10.0.0.53 10.0.0.54\n      cache 30\n    }\n<\/code><\/pre>\n\n\n\n<p>During migration, pods may need to resolve legacy DNS names. Configure forwarding rules before cutting over applications.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Storage: Persistent Volumes and CSI Reality<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">StorageClass Design<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Pure Storage FlashArray - Fast tier\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  name: pure-fast\n  annotations:\n    storageclass.kubernetes.io\/is-default-class: \"false\"\nprovisioner: pure-csi\nparameters:\n  backend: flasharray\n  csi.storage.k8s.io\/fstype: xfs\n  createoptions: -q\nreclaimPolicy: Delete\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\n---\n# Pure Storage FlashBlade - Shared\/NFS tier\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  name: pure-shared\nprovisioner: pure-csi\nparameters:\n  backend: flashblade\n  exportRules: \"*(rw,no_root_squash)\"\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\n---\n# Standard tier for non-critical workloads\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  name: standard\n  annotations:\n    storageclass.kubernetes.io\/is-default-class: \"true\"\nprovisioner: kubernetes.io\/vsphere-volume\nparameters:\n  diskformat: thin\n  datastore: vsanDatastore\nreclaimPolicy: Delete\nvolumeBindingMode: WaitForFirstConsumer\n<\/code><\/pre>\n\n\n\n<p>WaitForFirstConsumer is critical for block storage. 
Without it, volumes are bound before pod placement, breaking topology-aware scheduling.<\/p>\n\n\n\n<p><strong>What breaks if ignored:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Pods stuck in Pending state<\/li>\n\n\n\n<li>Volumes attached to unreachable nodes<\/li>\n\n\n\n<li>Zone-aware deployments fail silently<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Stateful Application Migration<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Database migration pattern using PVC cloning\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: db-data-migrated\nspec:\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: pure-fast\n  resources:\n    requests:\n      storage: 500Gi\n  dataSource:\n    kind: PersistentVolumeClaim\n    name: db-data-legacy\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Observability and Migration Validation<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Baseline Metrics Before Migration<\/h3>\n\n\n\n<p>You cannot validate a migration without knowing what normal looks like. 
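<\/p>\n\n\n\n<p>The validation gate itself is simple arithmetic. A minimal sketch of the regression checks (the thresholds mirror the 20% latency and 1% error-rate gates used in the alert rules later in this post; the function names are illustrative):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Sketch: compare post-migration metrics against recorded baselines.\ndef latency_gate(baseline_p95_ms, migrated_p95_ms, max_ratio=1.2):\n    \"\"\"Pass if p95 latency grew by no more than 20% over baseline.\"\"\"\n    return migrated_p95_ms \/ baseline_p95_ms &lt;= max_ratio\n\ndef error_gate(errors, requests, max_rate=0.01):\n    \"\"\"Pass if the 5xx error rate stays at or below 1%.\"\"\"\n    return (errors \/ requests) &lt;= max_rate\n\nprint(latency_gate(180.0, 210.0))  # 210\/180 ~ 1.17 -&gt; True (passes)\nprint(latency_gate(180.0, 230.0))  # 230\/180 ~ 1.28 -&gt; False (fails)\nprint(error_gate(40, 10_000))      # 0.4% -&gt; True (passes)\n<\/code><\/pre>\n\n\n\n<p>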
Capture baselines for at least two weeks before migration.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Key metrics to baseline\n# Application metrics\n- request_duration_seconds (p50, p95, p99)\n- request_total (rate)\n- error_total (rate)\n- active_connections\n\n# Infrastructure metrics\n- cpu_usage_percent\n- memory_usage_bytes\n- disk_io_seconds\n- network_bytes_transmitted\n- network_bytes_received\n\n# Business metrics\n- transactions_per_second\n- successful_checkouts\n- user_sessions_active\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Prometheus Rules for Migration<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\napiVersion: monitoring.coreos.com\/v1\nkind: PrometheusRule\nmetadata:\n  name: migration-validation\n  namespace: openshift-monitoring\nspec:\n  groups:\n    - name: migration.rules\n      rules:\n        # Alert if latency increases more than 20% post-migration\n        - alert: MigrationLatencyRegression\n          expr: |\n            (\n              histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{migrated=\"true\"}&#91;5m])) by (le, service))\n              \/\n              histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{migrated=\"false\"}&#91;5m])) by (le, service))\n            ) &gt; 1.2\n          for: 10m\n          labels:\n            severity: warning\n          annotations:\n            summary: \"Latency regression detected post-migration\"\n            description: \"Service {{ $labels.service }} p95 latency increased by more than 20%\"\n\n        # Alert on error rate increase\n        - alert: MigrationErrorRateIncrease\n          expr: |\n            (\n              sum(rate(http_requests_total{status=~\"5..\", migrated=\"true\"}&#91;5m])) by (service)\n              \/\n              sum(rate(http_requests_total{migrated=\"true\"}&#91;5m])) by (service)\n            ) &gt; 0.01\n          for: 5m\n          labels:\n            severity: critical\n          
annotations:\n            summary: \"Error rate exceeded 1% post-migration\"\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Grafana Dashboard for Migration<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Dashboard JSON snippet for migration comparison\n{\n  \"panels\": &#91;\n    {\n      \"title\": \"Request Latency Comparison\",\n      \"type\": \"timeseries\",\n      \"targets\": &#91;\n        {\n          \"expr\": \"histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{env='legacy'}&#91;5m])) by (le))\",\n          \"legendFormat\": \"Legacy p95\"\n        },\n        {\n          \"expr\": \"histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{env='openshift'}&#91;5m])) by (le))\",\n          \"legendFormat\": \"OpenShift p95\"\n        }\n      ]\n    },\n    {\n      \"title\": \"Error Rate Comparison\",\n      \"type\": \"stat\",\n      \"targets\": &#91;\n        {\n          \"expr\": \"sum(rate(http_requests_total{status=~'5..', env='openshift'}&#91;5m])) \/ sum(rate(http_requests_total{env='openshift'}&#91;5m]))\",\n          \"legendFormat\": \"OpenShift Error Rate\"\n        }\n      ]\n    }\n  ]\n}\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Log Aggregation for Troubleshooting<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Loki configuration for migration logs\napiVersion: loki.grafana.com\/v1\nkind: LokiStack\nmetadata:\n  name: logging-loki\n  namespace: openshift-logging\nspec:\n  size: 1x.small\n  storage:\n    schemas:\n      - version: v12\n        effectiveDate: \"2024-01-01\"\n    secret:\n      name: logging-loki-s3\n      type: s3\n  storageClassName: pure-fast\n  tenants:\n    mode: openshift-logging\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">CI\/CD and GitOps: What Actually Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Immutable Image Promotion<\/h3>\n\n\n\n<p>Do not rebuild images per 
environment. Build once, scan once, promote through environments.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Tekton pipeline for build-once promotion\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: build-and-promote\nspec:\n  params:\n    - name: git-revision\n      type: string\n  tasks:\n    - name: build\n      taskRef:\n        name: buildah\n      params:\n        - name: IMAGE\n          value: \"registry.internal\/app:$(params.git-revision)\"\n\n    - name: scan\n      taskRef:\n        name: trivy-scan\n      runAfter:\n        - build\n\n    - name: sign\n      taskRef:\n        name: cosign-sign\n      runAfter:\n        - scan\n\n    - name: promote-to-dev\n      taskRef:\n        name: skopeo-copy\n      runAfter:\n        - sign\n      params:\n        - name: srcImage\n          value: \"registry.internal\/app:$(params.git-revision)\"\n        - name: destImage\n          value: \"registry.internal\/app:dev\"\n<\/code><\/pre>\n\n\n\n<p>If you rebuild per environment:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Debugging becomes impossible (which build has the bug?)<\/li>\n\n\n\n<li>Security attestation is meaningless<\/li>\n\n\n\n<li>Promotion is not promotion, it is a new deployment<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">ArgoCD Application Example<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: app-prod\n  namespace: openshift-gitops\nspec:\n  project: production\n  destination:\n    namespace: app-prod\n    server: https:\/\/kubernetes.default.svc\n  source:\n    repoURL: https:\/\/github.com\/org\/app-config\n    targetRevision: main\n    path: overlays\/prod\n  syncPolicy:\n    automated:\n      prune: true\n      selfHeal: true\n    syncOptions:\n      - CreateNamespace=false\n      - PrunePropagationPolicy=foreground\n      - PruneLast=true\n  
ignoreDifferences:\n    - group: apps\n      kind: Deployment\n      jsonPointers:\n        - \/spec\/replicas  # Allow HPA to control replicas\n<\/code><\/pre>\n\n\n\n<p>Self-heal is not optional in regulated or audited environments. Manual drift is operational debt that compounds.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Environment Promotion with Kustomize<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Base kustomization\n# base\/kustomization.yaml\napiVersion: kustomize.config.k8s.io\/v1beta1\nkind: Kustomization\nresources:\n  - deployment.yaml\n  - service.yaml\n  - configmap.yaml\n\n# Production overlay\n# overlays\/prod\/kustomization.yaml\napiVersion: kustomize.config.k8s.io\/v1beta1\nkind: Kustomization\nresources:\n  - ..\/..\/base\npatches:\n  - patch: |\n      - op: replace\n        path: \/spec\/replicas\n        value: 5\n    target:\n      kind: Deployment\n      name: app\nimages:\n  - name: app\n    newName: registry.internal\/app\n    newTag: v1.2.3  # Pinned version, updated by CI\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Rollback Strategy<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Application-Level Rollback<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# ArgoCD rollback to previous version\nargocd app history app-prod\nargocd app rollback app-prod &lt;revision&gt;\n\n# Or using kubectl\nkubectl rollout undo deployment\/app -n app-prod\nkubectl rollout status deployment\/app -n app-prod\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Traffic-Based Rollback<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# OpenShift route for blue-green deployment\napiVersion: route.openshift.io\/v1\nkind: Route\nmetadata:\n  name: app\n  namespace: app-prod\nspec:\n  to:\n    kind: Service\n    name: app-green\n    weight: 100\n  alternateBackends:\n    - kind: Service\n      name: app-blue\n      weight: 0\n---\n# To rollback, shift traffic back to blue\n# oc 
patch route app -p '{\"spec\":{\"to\":{\"weight\":0},\"alternateBackends\":&#91;{\"kind\":\"Service\",\"name\":\"app-blue\",\"weight\":100}]}}'\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Full Migration Rollback<\/h3>\n\n\n\n<p>For critical systems, maintain the ability to roll back the entire migration for a defined period.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Rollback checklist\nrollback_criteria:\n  - error_rate &gt; 5% for 15 minutes\n  - p99_latency &gt; 2x baseline for 30 minutes\n  - data_integrity_check_failed\n  - critical_integration_broken\n\nrollback_procedure:\n  1. Announce rollback decision\n  2. Stop writes to new system (if applicable)\n  3. Verify data sync to legacy is current\n  4. Switch DNS\/load balancer to legacy\n  5. Verify legacy system health\n  6. Communicate rollback complete\n  7. Schedule post-mortem\n\nrollback_window: 14 days  # Maintain legacy systems for 2 weeks post-migration\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Data Rollback Considerations<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Continuous data sync for rollback capability\napiVersion: batch\/v1\nkind: CronJob\nmetadata:\n  name: data-sync-to-legacy\nspec:\n  schedule: \"*\/5 * * * *\"\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          containers:\n            - name: sync\n              image: registry.internal\/data-sync:latest\n              env:\n                - name: SOURCE_DB\n                  value: \"postgresql:\/\/new-db:5432\/app\"\n                - name: TARGET_DB\n                  value: \"postgresql:\/\/legacy-db:5432\/app\"\n                - name: SYNC_MODE\n                  value: \"incremental\"\n          restartPolicy: OnFailure\n<\/code><\/pre>\n\n\n\n<p><strong>Key principle:<\/strong> Never decommission legacy systems until the rollback window has passed and stakeholders have signed off.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 
class=\"wp-block-heading\">Migration Execution: What People Underestimate<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">State and Cutover<\/h3>\n\n\n\n<p>Databases and stateful services require parallel runs and controlled traffic switching. DNS TTLs must be reduced days in advance, not minutes.<\/p>\n\n\n\n<p>Most outages during migration are caused by:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Hidden hard-coded IPs in application configs, scripts, and cron jobs<\/li>\n\n\n\n<li>Legacy authentication dependencies (service accounts with IP-based trust)<\/li>\n\n\n\n<li>Assumed local storage paths that do not exist in containers<\/li>\n\n\n\n<li>Timezone differences between legacy VMs and containers (UTC default)<\/li>\n\n\n\n<li>Environment variables that were set manually and never documented<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Communication Plan<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Migration communication template\nstakeholders:\n  - business_owners\n  - development_teams\n  - operations\n  - security\n  - support\n\ncommunications:\n  - timing: T-14 days\n    message: \"Migration scheduled, review runbook\"\n    audience: all\n\n  - timing: T-2 days\n    message: \"DNS TTL reduced, final validation\"\n    audience: operations, development\n\n  - timing: T-0 (cutover)\n    message: \"Migration in progress, reduced SLA\"\n    audience: all\n\n  - timing: T+1 hour\n    message: \"Initial validation complete\"\n    audience: all\n\n  - timing: T+24 hours\n    message: \"Migration successful, monitoring continues\"\n    audience: all\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Operational Testing (Non-Negotiable)<\/h2>\n\n\n\n<p>Before production:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Kill a control plane node and verify automatic recovery<\/li>\n\n\n\n<li>Force etcd leader re-election during load<\/li>\n\n\n\n<li>Simulate storage controller 
failure<\/li>\n\n\n\n<li>Drain workers during peak load<\/li>\n\n\n\n<li>Test certificate rotation<\/li>\n\n\n\n<li>Verify backup and restore procedures<\/li>\n\n\n\n<li>Run security scans and penetration tests<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# Chaos testing example with Litmus\napiVersion: litmuschaos.io\/v1alpha1\nkind: ChaosEngine\nmetadata:\n  name: control-plane-chaos\n  namespace: litmus\nspec:\n  engineState: active\n  appinfo:\n    appns: openshift-etcd\n    applabel: app=etcd\n  chaosServiceAccount: litmus-admin\n  experiments:\n    - name: pod-delete\n      spec:\n        components:\n          env:\n            - name: TOTAL_CHAOS_DURATION\n              value: \"60\"\n            - name: CHAOS_INTERVAL\n              value: \"10\"\n            - name: FORCE\n              value: \"true\"\n<\/code><\/pre>\n\n\n\n<p>If the platform team is afraid to do this, the cluster is not ready.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">In short&#8230;<\/h2>\n\n\n\n<p>OpenShift migration is not a technology project. It is an operational transformation that happens to involve technology.<\/p>\n\n\n\n<p>The patterns in this post exist because I have seen the alternatives fail. Every shortcut, whether skipping reservations, ignoring kernel tuning, or compressing testing phases, creates debt that surfaces as production incidents.<\/p>\n\n\n\n<p><strong>Key principles:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Infrastructure must be deterministic before OpenShift installation<\/li>\n\n\n\n<li>Security is architecture, not an afterthought<\/li>\n\n\n\n<li>Migration strategy matters more than migration speed<\/li>\n\n\n\n<li>Observability validates success; without baselines, you are guessing<\/li>\n\n\n\n<li>Rollback capability is not optional for production systems<\/li>\n\n\n\n<li>Test failure modes before they test you<\/li>\n<\/ul>\n\n\n\n<p>The goal is not to move workloads. 
The goal is to move workloads without moving your problems with them and without creating new ones.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Designing and migrating to OpenShift is not about installing a cluster. It is about controlling failure domains, aligning schedulers, and avoiding hidden infrastructure bottlenecks that only surface under load or during outages. This post walks through concrete implementation patterns using Terraform and Ansible, explains why they exist, and highlights what will break if you get them wrong. Migration Strategy: Phased<a href=\"https:\/\/nicktailor.com\/tech-blog\/openshift-architecture-migration-design-building-secure-scalable-enterprise-platforms\/\" class=\"read-more\">Read More &#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[150],"tags":[],"class_list":["post-2221","post","type-post","status-publish","format-standard","hentry","category-openshift"],"_links":{"self":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts\/2221","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/comments?post=2221"}],"version-history":[{"count":1,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts\/2221\/revisions"}],"predecessor-version":[{"id":2222,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/posts\/2221\/revisions\/2222"}],"wp:attachment":[{"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/media?parent=2221"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\
/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/categories?post=2221"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nicktailor.com\/tech-blog\/wp-json\/wp\/v2\/tags?post=2221"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}