DR. ATABAK KHEIRKHAH
Cloud Platform Modernization Architect specializing in transforming legacy systems into reliable, observable, and cost-efficient Cloud platforms.
Certified: Google Professional Cloud Architect, AWS Solutions Architect, MapR Cluster Administrator
When I was doing my PhD, I spent years building computational models for cancer detection. It sounded glamorous from the outside - algorithms, prediction, science. In reality, it was a constant confrontation with risk, with the data itself as a major consideration.
The data I worked with was deeply personal: gene expression profiles, clinical records, diagnostic outcomes. If my models were wrong, real people paid the price. A “good enough” model was not good enough. We had to think about:
This mindset is exactly what I miss today when I look at how many companies are “doing AI”.
Right now, the narrative is simple and brutal:
Move fast, plug in AI everywhere, and worry about the details later.
From my perspective, that’s not innovation. That’s negligence dressed up as progress.
Let’s be honest about what’s actually happening inside many companies.
Employees are under pressure to deliver more, faster. They discover a large language model that can summarise, generate, and debug in seconds (and honestly, not everyone understands right now what these tools do with their inputs). So they start doing what feels natural:
Nobody is trying to be malicious. People are just trying to survive their workday.
The problem is simple:
The moment that data leaves your controlled environment and lands in a third-party AI tool, you’ve lost control.
You don’t really know:
In my PhD world, that level of uncertainty would have killed any project immediately. In today’s corporate world, it often doesn’t even stop a pilot.
Recent studies and incidents reveal the scope:
The pattern is consistent: well-intentioned employees, no guardrails, predictable outcomes.
We can talk about ethics and principles, but let’s be direct: a lot of AI adoption is driven by greed and fear.
In this context, “AI strategy” often means:
This might “work” short term, especially in startups desperate for growth. But structurally it creates a fragile system:
You end up with a strange paradox:
Companies shout about “AI transformation” while quietly gambling with the data that keeps them alive.
There’s a dangerous myth that human safety and business interests are in tension.
From what I’ve seen on both sides - research and industry - the opposite is true:
Protecting people and protecting the company are the same problem with different time horizons.
In my PhD project, we could not hide behind “we are just experimenting”. We had to assume:
Companies deploying AI at scale should assume exactly the same.
Most leaks don’t look like a Hollywood hack. They look like normal work.
Pattern:
Why it happens:
Pattern:
Why it happens:
Pattern:
Why it happens:
Pattern:
Why it happens:
None of this is spectacular. It’s routine. That’s exactly why it’s so dangerous.

If a company genuinely wants to adopt AI without playing roulette with data and trust, it needs something more mature than “let’s try this API”.

A minimal, workable approach can be broken into five steps.
Before touching AI, be explicit:
Data Classification Levels:
Clear Rules:
If a company can’t answer “what data do we actually have and how sensitive is it?”, then it has no business deploying AI on top of it.
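To make this concrete, here is a minimal sketch of what such an inventory could look like in code. The data sources, levels, and permitted uses are purely illustrative assumptions, not a recommendation for any particular environment.

```python
# Hypothetical data inventory: each source gets a sensitivity level and an
# explicit statement of what kind of AI usage that level permits.
DATA_INVENTORY = {
    "marketing_site_content": {"classification": "public", "ai_usage": "any approved tool"},
    "internal_wiki": {"classification": "internal", "ai_usage": "enterprise providers only"},
    "source_code": {"classification": "confidential", "ai_usage": "private model, audited"},
    "customer_records": {"classification": "pii", "ai_usage": "private model, redacted, encrypted"},
}

def allowed_ai_usage(source: str) -> str:
    """Look up what AI usage is permitted for a given data source."""
    entry = DATA_INVENTORY.get(source)
    # Unknown data is the dangerous default: treat it as sensitive until classified.
    return entry["ai_usage"] if entry else "unclassified: treat as confidential"
```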
Not all AI features are equal. A marketing text generator and an AI system deciding who gets a loan do not belong in the same risk category.
Risk Assessment Matrix:
Define two dimensions:

You then get four quadrants:
Decision Framework:
Anything high-impact + high-sensitivity should be treated like a medical model: audited, tested, explainable, and reversible.
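As a rough sketch of that decision framework, the function below maps the two dimensions onto review requirements; the quadrant labels and controls are illustrative assumptions, not a standard.

```python
from typing import Dict, List

def review_requirements(high_impact: bool, high_sensitivity: bool) -> Dict[str, List[str]]:
    """Map the (impact, sensitivity) quadrant to the controls it demands."""
    if high_impact and high_sensitivity:
        # The "medical model" quadrant: audited, tested, explainable, reversible.
        return {"controls": ["formal audit", "explainability review",
                             "pre-deployment testing", "rollback plan", "human sign-off"]}
    if high_impact:
        return {"controls": ["edge-case testing", "monitoring", "rollback plan"]}
    if high_sensitivity:
        return {"controls": ["DLP / redaction", "restricted providers", "audit logging"]}
    return {"controls": ["standard review"]}
```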
A secure AI setup is more than “user → LLM API”.
You need an internal layer that:
Most companies go wrong by integrating AI at the edge (direct from browser to vendor). You want AI behind an internal gateway that you control.
People are not going to guess the right behaviour. You must be explicit.
Required Elements:
If AI is “everyone’s job” but nobody is accountable, risk will accumulate silently until something breaks.
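One way to stop accountability from being “everyone’s job” is to write the rules down as data, with exactly one named owner per rule set. The teams, tools, and owner roles below are placeholders for illustration only.

```python
# Hypothetical per-team AI usage rules. Every rule set has a single accountable owner.
AI_USAGE_RULES = {
    "engineering": {
        "owner": "head_of_platform",
        "allowed": ["enterprise code assistant via internal gateway"],
        "forbidden": ["pasting customer data, credentials, or secrets into any AI tool"],
    },
    "support": {
        "owner": "support_director",
        "allowed": ["internal AI gateway for ticket summaries"],
        "forbidden": ["public chatbots for any ticket content"],
    },
}

def rule_owner(team: str) -> str:
    """Return the person accountable for a team's AI usage rules."""
    return AI_USAGE_RULES.get(team, {}).get("owner", "unassigned: escalate to the CISO")
```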
Assume mistakes. Then build for them.
Monitoring Requirements:
Companies that pretend “nothing bad will happen” with AI are signalling that they have no realistic understanding of how technology fails in the real world.
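As one sketch of what that monitoring could look like in practice, the function below flags users who push an unusual volume of sensitive requests through the gateway; the threshold is an assumption to tune, and the audit-log fields match the AuditLogger in the implementation further down.

```python
from collections import Counter
from typing import Dict, List

def flag_unusual_sensitive_usage(audit_entries: List[Dict], threshold: int = 20) -> List[str]:
    """Return user_ids whose confidential/PII request count exceeds the threshold."""
    sensitive_counts = Counter(
        entry["user_id"]
        for entry in audit_entries
        if entry.get("data_classification") in ("confidential", "pii")
    )
    return [user for user, count in sensitive_counts.items() if count > threshold]
```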

Architecture Components:
Example Implementation (Python/FastAPI):
"""
AI Gateway Service - Secure routing and policy enforcement
"""
from fastapi import FastAPI, HTTPException, Request
from pydantic import BaseModel
from typing import Optional, Dict, Any
import logging
from datetime import datetime
# Data classification
class DataClassification:
PUBLIC = "public"
INTERNAL = "internal"
CONFIDENTIAL = "confidential"
PII = "pii"
# DLP Service
class DLPService:
"""Data Loss Prevention - detects and redacts sensitive data"""
def classify_data(self, content: str) -> str:
"""Classify data sensitivity level"""
# PII detection patterns
pii_patterns = [
r'\b\d{3}-\d{2}-\d{4}\b', # SSN
r'\b\d{16}\b', # Credit card
r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b', # Email
]
import re
for pattern in pii_patterns:
if re.search(pattern, content):
return DataClassification.PII
# Confidential keywords
confidential_keywords = ['proprietary', 'confidential', 'internal strategy']
if any(keyword in content.lower() for keyword in confidential_keywords):
return DataClassification.CONFIDENTIAL
return DataClassification.PUBLIC
def redact_sensitive(self, content: str, classification: str) -> str:
"""Redact sensitive data based on classification"""
if classification == DataClassification.PII:
# Redact PII patterns
import re
content = re.sub(r'\b\d{3}-\d{2}-\d{4}\b', '[SSN-REDACTED]', content)
content = re.sub(r'\b\d{16}\b', '[CARD-REDACTED]', content)
content = re.sub(
r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b',
'[EMAIL-REDACTED]',
content
)
return content
# Policy Engine
class PolicyEngine:
"""Enforces AI usage policies based on data classification"""
def __init__(self):
self.rules = {
DataClassification.PUBLIC: {
"allowed_providers": ["openai", "anthropic", "google"],
"requires_approval": False,
},
DataClassification.INTERNAL: {
"allowed_providers": ["openai-enterprise", "anthropic-enterprise"],
"requires_approval": True,
},
DataClassification.CONFIDENTIAL: {
"allowed_providers": ["private-vpc-model"],
"requires_approval": True,
"requires_audit": True,
},
DataClassification.PII: {
"allowed_providers": ["private-vpc-model"],
"requires_approval": True,
"requires_audit": True,
"requires_encryption": True,
}
}
def check_policy(self, classification: str, provider: str) -> Dict[str, Any]:
"""Check if request complies with policy"""
if classification not in self.rules:
return {"allowed": False, "reason": "Unknown classification"}
rule = self.rules[classification]
if provider not in rule["allowed_providers"]:
return {
"allowed": False,
"reason": f"Provider {provider} not allowed for {classification} data"
}
return {
"allowed": True,
"requires_approval": rule.get("requires_approval", False),
"requires_audit": rule.get("requires_audit", False),
"requires_encryption": rule.get("requires_encryption", False),
}
# Audit Logger
class AuditLogger:
"""Logs all AI interactions for compliance and incident response"""
def log_request(
self,
user_id: str,
classification: str,
provider: str,
input_size: int,
output_size: int,
timestamp: datetime,
policy_result: Dict[str, Any],
):
"""Log AI request to audit system"""
log_entry = {
"timestamp": timestamp.isoformat(),
"user_id": user_id,
"data_classification": classification,
"provider": provider,
"input_size_bytes": input_size,
"output_size_bytes": output_size,
"policy_allowed": policy_result["allowed"],
"policy_reason": policy_result.get("reason"),
}
# In production: write to secure audit log (e.g., BigQuery, CloudWatch)
logging.info(f"AI_AUDIT: {log_entry}")
# Example: Write to database or SIEM
# audit_db.insert(log_entry)
# Main Gateway Service
app = FastAPI(title="AI Gateway", version="1.0.0")
dlp_service = DLPService()
policy_engine = PolicyEngine()
audit_logger = AuditLogger()
class AIRequest(BaseModel):
content: str
provider: str = "openai"
user_id: str
class AIResponse(BaseModel):
result: str
classification: str
redacted: bool
policy_compliant: bool
@app.post("/ai/generate", response_model=AIResponse)
async def generate_ai(request: AIRequest):
"""Secure AI generation endpoint"""
# Step 1: Classify data
classification = dlp_service.classify_data(request.content)
# Step 2: Check policy
policy_result = policy_engine.check_policy(classification, request.provider)
if not policy_result["allowed"]:
raise HTTPException(
status_code=403,
detail=f"Policy violation: {policy_result['reason']}"
)
# Step 3: Redact if needed
redacted_content = request.content
redacted = False
if classification in [DataClassification.PII, DataClassification.CONFIDENTIAL]:
redacted_content = dlp_service.redact_sensitive(request.content, classification)
redacted = redacted_content != request.content
# Step 4: Route to appropriate provider
# In production: call actual AI provider API
result = f"[AI Response for {classification} data]"
# Step 5: Audit log
audit_logger.log_request(
user_id=request.user_id,
classification=classification,
provider=request.provider,
input_size=len(request.content.encode()),
output_size=len(result.encode()),
timestamp=datetime.utcnow(),
policy_result=policy_result,
)
return AIResponse(
result=result,
classification=classification,
redacted=redacted,
policy_compliant=True,
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8080)
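Assuming the gateway above is running locally on port 8080, a quick smoke test could look like the following; the expected outcomes in the comments follow from the DLP patterns and policy rules defined above.

```python
import requests  # third-party HTTP client, used here only for the demo

payload = {
    "content": "Summarise this thread from jane.doe@example.com",
    "provider": "openai",  # public provider: should be rejected for PII content
    "user_id": "demo-user",
}
resp = requests.post("http://localhost:8080/ai/generate", json=payload)
print(resp.status_code)  # expected: 403, the email address classifies the content as PII

payload["provider"] = "private-vpc-model"  # allowed for PII; content is redacted first
resp = requests.post("http://localhost:8080/ai/generate", json=payload)
print(resp.status_code, resp.json()["classification"])  # expected: 200 pii
```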
Example: Automated Classification Service
"""
Automated data classification using ML and pattern matching
"""
import re
from typing import List, Tuple
from dataclasses import dataclass
@dataclass
class ClassificationResult:
level: str
confidence: float
matched_patterns: List[str]
recommendations: List[str]
class AutoClassifier:
"""Automatically classify data sensitivity"""
def __init__(self):
self.pii_patterns = {
"ssn": r'\b\d{3}-\d{2}-\d{4}\b',
"credit_card": r'\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b',
"email": r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b',
"phone": r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b',
"ip_address": r'\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b',
}
self.confidential_keywords = [
"proprietary", "confidential", "internal use only",
"trade secret", "nda", "non-disclosure"
]
self.code_indicators = [
"def ", "function ", "class ", "import ", "package ",
"SELECT", "INSERT", "CREATE TABLE"
]
def classify(self, content: str) -> ClassificationResult:
"""Classify content and return result with confidence"""
matched = []
recommendations = []
# Check for PII
pii_found = False
for name, pattern in self.pii_patterns.items():
if re.search(pattern, content, re.IGNORECASE):
matched.append(f"PII: {name}")
pii_found = True
if pii_found:
return ClassificationResult(
level=DataClassification.PII,
confidence=0.95,
matched_patterns=matched,
recommendations=[
"Never send to public AI models",
"Use private VPC model only",
"Require encryption in transit and at rest"
]
)
# Check for confidential indicators
confidential_found = any(
keyword in content.lower() for keyword in self.confidential_keywords
)
# Check for code
code_found = any(
indicator in content for indicator in self.code_indicators
)
if confidential_found or code_found:
return ClassificationResult(
level=DataClassification.CONFIDENTIAL,
confidence=0.85,
matched_patterns=matched + (["confidential_keywords"] if confidential_found else []) + (["code_indicators"] if code_found else []),
recommendations=[
"Use private model or approved enterprise provider",
"Require manager approval",
"Enable audit logging"
]
)
# Default to internal
return ClassificationResult(
level=DataClassification.INTERNAL,
confidence=0.70,
matched_patterns=matched,
recommendations=[
"Use approved enterprise AI providers",
"Review output before use"
]
)
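For a quick illustration of how the classifier behaves, a snippet containing an email address should land in the PII tier; the expected outputs in the comments follow from the patterns defined above.

```python
classifier = AutoClassifier()
result = classifier.classify("Contact jane.doe@example.com about the Q3 roadmap")
print(result.level)             # "pii" - the email pattern matched
print(result.matched_patterns)  # ["PII: email"]
print(result.recommendations)   # private VPC model only, encryption required, etc.
```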
Example: Environment-Based Kill Switch
"""
Kill switch for disabling AI features instantly
"""
import os
from typing import Dict, Any, Callable, Optional
from enum import Enum
class KillSwitchStatus(Enum):
ENABLED = "enabled"
DISABLED = "disabled"
DEGRADED = "degraded" # Limited functionality
class KillSwitch:
"""Centralized kill switch for AI services"""
def __init__(self):
self.status = self._read_status()
self.reason = os.getenv("AI_KILL_SWITCH_REASON", "")
def _read_status(self) -> KillSwitchStatus:
"""Read kill switch status from environment"""
status_str = os.getenv("AI_ENABLED", "true").lower()
if status_str == "false" or status_str == "disabled":
return KillSwitchStatus.DISABLED
elif status_str == "degraded":
return KillSwitchStatus.DEGRADED
else:
return KillSwitchStatus.ENABLED
def is_enabled(self) -> bool:
"""Check if AI is enabled"""
return self.status == KillSwitchStatus.ENABLED
def get_fallback_response(self, service_type: str) -> Dict[str, Any]:
"""Get deterministic fallback when AI is disabled"""
fallbacks = {
"code": {
"suggestions": [],
"message": "AI code assistance is currently disabled. Please contact IT support.",
},
"document": {
"summary": "",
"message": "AI document processing is currently disabled.",
},
"chat": {
"response": "AI chat is currently unavailable. Please try again later.",
}
}
return fallbacks.get(service_type, {"message": "AI service unavailable"})
def with_kill_switch(
self,
ai_function: Callable,
service_type: str,
*args,
**kwargs
) -> Dict[str, Any]:
"""Execute AI function with kill switch protection"""
if not self.is_enabled():
return {
"result": self.get_fallback_response(service_type),
"ai_enabled": False,
"status": "fallback",
"reason": self.reason or "Kill switch activated",
}
try:
result = ai_function(*args, **kwargs)
return {
"result": result,
"ai_enabled": True,
"status": "success",
}
except Exception as e:
# On error, fall back gracefully
return {
"result": self.get_fallback_response(service_type),
"ai_enabled": True,
"status": "error",
"error": str(e),
}
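Usage is deliberately boring: wrap any AI call and let the environment decide whether it runs. The fake_summarise function below is a stand-in for a real provider call.

```python
def fake_summarise(text: str) -> str:
    """Stand-in for a real AI provider call."""
    return f"summary of {len(text)} characters"

switch = KillSwitch()  # reads AI_ENABLED / AI_KILL_SWITCH_REASON from the environment
outcome = switch.with_kill_switch(fake_summarise, "document", "a long incident report")
print(outcome["status"], outcome["result"])
# With AI_ENABLED=true  -> "success" plus the summary
# With AI_ENABLED=false -> "fallback" plus the deterministic document message
```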

Sample AI Usage Policy Structure:
This policy applies to all employees, contractors, and third parties using AI tools for company business.
If you suspect data has been leaked:
All employees must complete:
Violations may result in:
RACI Matrix for AI Governance:
| Activity | CISO | Legal | Engineering | Product | Employees |
|---|---|---|---|---|---|
| Data Classification | A | C | R | I | I |
| Policy Creation | R | A | C | C | I |
| Technical Controls | A | I | R | C | I |
| Training | C | I | I | C | R |
| Incident Response | R | A | C | I | R |
| Monitoring | A | I | R | C | I |
Legend: R = Responsible, A = Accountable, C = Consulted, I = Informed.
Scenario: A software company’s engineers were using ChatGPT to debug code. Over several months, proprietary algorithms and architecture details were pasted into the tool.
Impact:
Root Cause:
Solution Implemented:
Lessons:
Scenario: A healthcare organization’s staff used AI tools to summarize patient notes. Patient names, conditions, and treatment plans were exposed.
Impact:
Root Cause:
Solution Implemented:
Lessons:
Scenario: A financial services company wanted to use AI for customer support but had strict regulatory requirements.
Approach:
Results:
Key Success Factors:
Working on predictive models for cancer detection taught me a few non-negotiable rules:
I see too many companies obsess over model performance and UX while ignoring the basics:
When you’re dealing with people’s lives or livelihoods, “we were experimenting” is not an excuse. It’s a confession.
| Research Context | Corporate AI Context |
|---|---|
| Patient data privacy | Customer PII protection |
| Model explainability | Regulatory compliance |
| Error cost (misdiagnosis) | Error cost (data breach) |
| Peer review | Security audit |
| IRB approval | CISO/Legal approval |
| Reproducibility | Audit trails |
If you’re serious about AI and don’t want to be the next cautionary case study, treat these as the bare minimum:
Policies without controls are theatre. You need:
Don’t treat a marketing chatbot and an AI credit scorer as the same. They are not.
Track not only adoption and revenue, but also:
If those metrics are invisible, your AI programme is fundamentally incomplete.
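For illustration, some of those risk-side numbers can be computed straight from the gateway's audit log. This sketch assumes rejected requests are logged as well (the gateway example above only logs allowed ones, so in practice the audit call would move in front of the policy gate).

```python
from typing import Dict, List

def ai_risk_metrics(audit_entries: List[Dict]) -> Dict[str, float]:
    """Risk-side KPIs from audit-log entries shaped like AuditLogger's output."""
    total = len(audit_entries) or 1
    blocked = sum(1 for e in audit_entries if not e.get("policy_allowed", True))
    sensitive = sum(1 for e in audit_entries
                    if e.get("data_classification") in ("confidential", "pii"))
    return {
        "policy_block_rate": blocked / total,   # how often people hit the guardrails
        "sensitive_share": sensitive / total,   # share of traffic touching sensitive data
        "requests_audited": float(len(audit_entries)),
    }
```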
AI is not going away. The question is whether we force it to grow up - or we let greed and fear drive us into predictable disasters.
The choice is not “AI or safety”. The serious choice is:
AI with discipline, or AI with collateral damage.
Author: Dr. Atabak Kheirkhah
Date: November 30, 2025
Contact: atabakkheirkhah@gmail.com
This is a personal blog. The views, thoughts, and opinions expressed here are my own and do not represent, reflect, or constitute the views, policies, or positions of any employer, university, client, or organization I am associated with or have been associated with.