Configuring Azure OpenAI Private Link: Keeping AI Traffic Off the Public Internet

May 2, 2026

You open your CI/CD pipeline logs, and there it is: a curl call to your-resource.openai.azure.com — a public FQDN, resolving to a Microsoft-owned IP, carrying your internal service names and proprietary logic over the public internet. TLS encrypts the payload, but the endpoint itself is still exposed. Anyone watching the wire knows you are calling Azure OpenAI and roughly when.

This is not a flaw — it’s a default designed for onboarding, not production. Every Azure OpenAI resource ships with a public endpoint. Even if you add IP firewall rules, you are still routing sensitive prompts over public IP space. Private Link removes this public endpoint entirely, giving the AI service a private RFC 1918 address inside your Virtual Network.

Azure Private Link creates a Private Endpoint — a virtual network interface (NIC) with a private IP address — for your Azure OpenAI resource.

Before choosing Private Link, it helps to see what the alternatives actually give you:

Private Endpoints vs. Alternatives

| Feature | Service Endpoints | IP Firewall Rules | Private Link |
| --- | --- | --- | --- |
| Traffic Path | Azure Backbone | Public Internet | VNet Internal |
| Public IP | Exists | Exists (Restricted) | None (Disabled) |
| DNS | Public Resolution | Public Resolution | Private Resolution |
| On-Premises | No (VPN/ER fails) | Yes (via Public IP) | Yes (via Private IP) |

A Service Endpoint restricts access to a VNet, but the service still has a public IP. Private Link gives it a private address (e.g., 10.0.1.5) from your own subnet. That IP is the only address the service answers on once you disable public access.

2. VNet and Subnet Design

Before you provision the endpoint, you need to prepare your network. Private Endpoints are created in a subnet with privateEndpointNetworkPolicies set to Disabled, which means Network Security Groups (NSGs) do not evaluate traffic destined for the Private Endpoint. If you need NSGs to filter that traffic, set the property to Enabled instead.

Subnet Strategy

A Private Endpoint can share a subnet with other resources, but a dedicated /28 subnet (16 IPs) for “AI-Private-Links” simplifies NSG management and avoids exhausting address space in shared subnets.

// Define a secure subnet for AI Private Endpoints
resource peSubnet 'Microsoft.Network/virtualNetworks/subnets@2023-04-01' = {
  name: 'snet-ai-pe'
  parent: vnet // 'vnet' must be a reference to an existing Microsoft.Network/virtualNetworks resource
  properties: {
    addressPrefix: '10.0.1.0/28'
    privateEndpointNetworkPolicies: 'Disabled' // NSGs are bypassed while Disabled; set 'Enabled' to apply them
  }
}

In a hub-and-spoke topology, place the Private Endpoint in the hub’s shared-services VNet. This allows all peered spoke VNets to reach the AI service through a single entry point, centralizing DNS and security monitoring.
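The spoke side of that peering can be sketched in Bicep; `spokeVnet` and `hubVnetId` are illustrative names, not values from this article, and the peering must also be created in the hub-to-spoke direction:

```bicep
// Spoke-to-hub peering so spoke workloads can reach the hub's Private Endpoint.
// 'spokeVnet' and 'hubVnetId' are placeholder references.
resource spokeToHub 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-04-01' = {
  name: 'peer-spoke-to-hub'
  parent: spokeVnet
  properties: {
    remoteVirtualNetwork: { id: hubVnetId }
    allowVirtualNetworkAccess: true // lets the spoke reach the hub's Private Endpoint IP
    allowForwardedTraffic: false
    useRemoteGateways: false
  }
}
```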

Hub-and-Spoke Networking Topology

[Diagram: hub VNet hosting the OpenAI Private Endpoint (10.0.1.5) and the Private DNS zone; spoke VNets connect through VNet peering; on-premises networks reach the endpoint in transit over ExpressRoute/VPN]

3. Provisioning the Private Endpoint

The most reliable way to deploy Private Link is via Bicep. You must target the `account` group ID, which Azure uses for Cognitive Services (including Azure OpenAI) accounts.

resource privateEndpoint 'Microsoft.Network/privateEndpoints@2023-04-01' = {
  name: 'pe-openai-prod'
  location: location
  properties: {
    subnet: { id: peSubnet.id }
    privateLinkServiceConnections: [
      {
        name: 'conn-openai'
        properties: {
          privateLinkServiceId: openAiResourceId // Full ARM resource ID
          groupIds: ['account'] // Required for Azure AI / OpenAI
        }
      }
    ]
  }
}

If your OpenAI resource is in a different subscription than the VNet, the connection enters a “Pending” state and requires manual approval in the Networking blade of the OpenAI resource.
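When that happens, the pending connection can also be approved from the CLI. A sketch, with resource and group names as placeholders:

```shell
# List pending connections on the OpenAI resource (names are placeholders)
az network private-endpoint-connection list \
  --name your-resource \
  --resource-group rg-ai \
  --type Microsoft.CognitiveServices/accounts

# Approve the connection created from the other subscription
az network private-endpoint-connection approve \
  --resource-name your-resource \
  --resource-group rg-ai \
  --type Microsoft.CognitiveServices/accounts \
  --name <connection-name> \
  --description "Approved for prod VNet"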

4. DNS Configuration: The Split-Brain Pattern

Private Link creates the private IP, but your applications still use the FQDN your-resource.openai.azure.com. Without a DNS override, that FQDN still resolves to the public IP — and your traffic fails the moment you disable public access. The Private DNS step is where most implementations fail silently.

Split-Brain DNS Resolution

[Diagram: 1) an in-VNet client queries the FQDN your-resource.openai.azure.com; 2) the query reaches Azure DNS (168.63.129.16); 3) the linked Private DNS Zone (privatelink.openai.azure.com) answers; 4) the client receives the private IP 10.0.1.5]

Private DNS Zones

Azure provides a specialized Private DNS Zone: privatelink.openai.azure.com. When you link your VNet to this zone, any query for an OpenAI resource returns the private IP. The privateDnsZoneGroups resource in Bicep automates the A-record registration.

resource dnsGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2023-04-01' = {
  name: 'default'
  parent: privateEndpoint
  properties: {
    privateDnsZoneConfigs: [
      {
        name: 'openai-config'
        properties: {
          privateDnsZoneId: dnsZoneId // ID of the privatelink.openai.azure.com zone
        }
      }
    ]
  }
}
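The zone and the VNet link themselves also have to exist. A minimal Bicep sketch, assuming `vnet` references an existing virtual network:

```bicep
// The privatelink zone, plus the link that makes the VNet resolve against it
resource dnsZone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'privatelink.openai.azure.com'
  location: 'global'
}

resource dnsLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  name: 'link-hub-vnet'
  parent: dnsZone
  location: 'global'
  properties: {
    registrationEnabled: false // A-records come from privateDnsZoneGroups, not auto-registration
    virtualNetwork: { id: vnet.id }
  }
}
```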

For on-premises developers using VPN or ExpressRoute, configure a conditional forwarder on your local DNS server pointing openai.azure.com to the inbound endpoint of an Azure DNS Private Resolver in your hub VNet.
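On a BIND-based on-premises DNS server, that conditional forwarder might look like the following; the inbound-endpoint IP 10.0.2.4 is a placeholder:

```conf
// named.conf fragment: forward all openai.azure.com queries to the
// Azure DNS Private Resolver inbound endpoint (placeholder IP)
zone "openai.azure.com" {
    type forward;
    forward only;
    forwarders { 10.0.2.4; };
};
```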

5. Disabling Public Network Access

Once you verify the Private Endpoint, disable public access. Do not perform this step first — you will lock yourself out of the resource until the private path is functional.

Set publicNetworkAccess to Disabled. Any request not originating from your Private Endpoint is rejected with a 403 or a connection refusal.
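In Bicep, that lockdown is a property on the account resource itself. A sketch with illustrative names and SKU:

```bicep
resource openAi 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: 'your-resource'
  location: location
  kind: 'OpenAI'
  sku: { name: 'S0' }
  properties: {
    customSubDomainName: 'your-resource' // required for Private Link DNS integration
    publicNetworkAccess: 'Disabled'      // only the Private Endpoint path remains
    networkAcls: { defaultAction: 'Deny' }
  }
}
```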

Enforcement at Scale

Assign the built-in Azure Policy “Cognitive Services accounts should disable public network access” (ID: 0725b4dd-7e76-479c-a735-68e7ee23d5ca) in Deny mode at your Management Group level. This blocks any future AI deployments that skip Private Link.
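The assignment can be scripted; this sketch uses a placeholder management group and assumes the policy exposes the standard `effect` parameter:

```shell
az policy assignment create \
  --name deny-public-ai \
  --scope /providers/Microsoft.Management/managementGroups/my-mg \
  --policy 0725b4dd-7e76-479c-a735-68e7ee23d5ca \
  --params '{"effect": {"value": "Deny"}}'
```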

6. Routing CI/CD Traffic

Microsoft-hosted GitHub Actions runners live on the public internet. They cannot reach your VNet. If you disable public access, those runners fail immediately.

Self-Hosted Runners

Deploy self-hosted runners inside your VNet (or a peered VNet). They inherit the VNet’s DNS and resolve the OpenAI private IP automatically. Azure Container Instances (ACI) or Azure Container Apps work well for hosting these runners in a serverless fashion within your network boundary.

# GitHub Actions snippet for self-hosted runners in a VNet
jobs:
  ai-task:
    runs-on: [self-hosted, azure-vnet]
    steps:
      - name: Call Private OpenAI
        run: |
          # This FQDN now resolves to 10.x.x.x automatically
          curl https://your-resource.openai.azure.com/status

Verification and Testing

  1. Internal Resolution: From a VM inside the VNet, run nslookup your-resource.openai.azure.com. It must return the private IP (e.g., 10.0.1.5).
  2. External Refusal: From your local machine (without VPN), run the same nslookup. It may resolve to a public IP due to the public CNAME chain, but any curl request must return a 403 error.
  3. Audit Logs: Check the “Networking” blade in the Azure portal; “Public network access” should be “Disabled.”
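Steps 1 and 2 can be scripted from a jump box; the FQDN is a placeholder, and the expected results are those described above:

```shell
FQDN=your-resource.openai.azure.com

# Inside the VNet: expect a 10.x.x.x answer
nslookup "$FQDN"

# An HTTP status code proves TCP reachability. Inside the VNet it arrives via
# the private IP; outside, expect a 403 once public access is disabled.
curl -s -o /dev/null -w '%{http_code}\n' "https://$FQDN/"
```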

Best Practices

  • Centralize DNS: Maintain one privatelink.openai.azure.com zone in your Hub subscription and link it to all spokes.
  • Automate Records: Always use privateDnsZoneGroups to manage A-records. Manual records drift and cause outages during resource recreation.
  • Subnet Policies: Remember that privateEndpointNetworkPolicies: 'Disabled' bypasses NSG filtering for the Private Endpoint; set it to 'Enabled' if you need NSGs to filter traffic to the AI endpoint.
  • No Ping: Azure blocks ICMP to Private Endpoint NICs. Use tcping or curl to verify connectivity.
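Since ICMP is blocked, a TCP-level check against the example Private Endpoint address might look like this:

```shell
# TCP connect test to the Private Endpoint (10.0.1.5 is the example address)
nc -zv 10.0.1.5 443

# curl-only alternative: a successful TCP handshake, then immediate close
curl -v telnet://10.0.1.5:443 --connect-timeout 5 </dev/null
```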
