
AI Governance for Tech Companies: Product and Platform

How technology companies should structure AI governance when AI is the product — covering model cards, EU AI Act provider obligations, and downstream liability.

Starkguard Team

You Don't Just Use AI. You Ship It.

Every industry faces AI governance challenges. Technology companies face a version that's structurally different: AI isn't a tool you adopted to improve internal operations. It's the product. It's what your customers buy, what your platform serves, what your API returns.

When a bank deploys an AI credit model, the bank bears the governance obligation. When a technology company builds that AI credit model and sells it to the bank, the governance obligation splits — and under the EU AI Act, the provider carries the heavier load.

This distinction reshapes everything about how technology companies should approach AI governance. Your governance program isn't protecting your organization from the risks of AI. It's protecting your customers, their users, and the public from the risks of your AI.

The Provider-Deployer Split Under the EU AI Act

The EU AI Act creates a clear hierarchy of responsibility. Providers — the companies that develop or place AI systems on the market — bear the primary compliance burden for high-risk AI systems. Deployers — the companies that use those systems — have obligations too, but they're lighter. Providers must implement a risk management system, ensure data governance and training data quality, create and maintain technical documentation, design for transparency with clear user information, build in human oversight mechanisms, and meet accuracy, robustness, and cybersecurity requirements.

For technology companies, this means your customers inherit obligations from the product you build. If your AI system is classified as high-risk under Annex III — creditworthiness assessment, employee screening, safety components of critical infrastructure — you carry the provider obligations regardless of how your customer deploys it.

The compliance timeline is tight. General-purpose AI (GPAI) model provider obligations entered application on August 2, 2025. High-risk AI system obligations take full effect August 2, 2026, and the European Commission's enforcement powers, including fines, apply from that date. Penalties scale with the violation: up to 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious infringements.

General-Purpose AI: The New Regulatory Category

The EU AI Act created something genuinely new in AI regulation: obligations specific to general-purpose AI model providers. Under the European Commission's guidelines, a model trained using computational resources exceeding 10^23 floating-point operations that can generate text, images, or video is presumed to qualify as GPAI.
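
To see where a given model lands, a common back-of-the-envelope heuristic estimates training compute as roughly 6 × parameters × training tokens. A minimal Python sketch, assuming that heuristic (the thresholds come from the Act and the Commission's guidance; the approximation itself does not):

# Back-of-the-envelope check against the EU AI Act's compute thresholds.
# Uses the common ~6 * parameters * training-tokens approximation for
# training FLOPs (an estimation heuristic, not language from the Act).

GPAI_THRESHOLD_FLOPS = 1e23     # Commission guidelines: presumed GPAI above this
SYSTEMIC_RISK_FLOPS = 1e25      # Article 51: presumed systemic risk above this

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

for name, params, tokens in [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
]:
    flops = estimate_training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs | "
          f"GPAI presumption: {flops > GPAI_THRESHOLD_FLOPS} | "
          f"systemic-risk presumption: {flops > SYSTEMIC_RISK_FLOPS}")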

GPAI providers must put in place a policy to comply with EU copyright law and make a "sufficiently detailed summary" of training data publicly available. They must maintain technical documentation about model development and provide it to the AI Office upon request. They must cooperate with the AI Office and downstream providers to enable compliance.

Models assessed as posing systemic risk — presumed when training compute exceeds 10^25 floating-point operations, or designated by the Commission — face additional obligations: adversarial testing and red-teaming, serious-incident tracking and reporting to the AI Office, cybersecurity protections, and energy consumption reporting.

The GPAI Code of Practice, developed by independent experts and submitted to the Commission, offers a voluntary compliance pathway. Providers that adhere to the Code benefit from a presumption of compliance and may face fewer enforcement actions. But the Code is a floor, not a ceiling.

Model Cards and Transparency Reporting: From Research Practice to Regulatory Requirement

Model cards — structured documentation of an AI model's purpose, training data, performance characteristics, limitations, and ethical considerations — originated as a research best practice at Google in 2018. They've become a de facto industry standard and are rapidly becoming a regulatory expectation.

Microsoft published its second annual Responsible AI Transparency Report in mid-2025, including its Frontier Governance Framework for advanced AI models. Google, Anthropic, and OpenAI publish model cards for major releases. The practice has matured from a one-page summary to a comprehensive disclosure that covers training data composition and sourcing, evaluation methodology and benchmark results, known limitations and failure modes, bias evaluation across demographic groups, intended and out-of-scope use cases, and environmental impact including compute and energy consumption.
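
As a sketch of how those disclosure areas can be captured as a structured, versionable artifact rather than free-form prose, the schema below uses illustrative field names (they are assumptions for the example, not a published standard):

# Illustrative model card schema covering the disclosure areas listed
# above. Field names are assumptions for this sketch, not a standard.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    training_data_sources: list[str]               # composition and sourcing
    evaluation_benchmarks: dict[str, float]        # benchmark -> score
    known_limitations: list[str]                   # including failure modes
    bias_evaluations: dict[str, dict[str, float]]  # group -> metric -> value
    training_compute_flops: float | None = None    # environmental impact
    energy_consumption_kwh: float | None = None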

Here's the uncomfortable finding: transparency is declining industry-wide. Stanford's 2025 Foundation Model Transparency Index found average scores dropped from 58/100 in 2024 to 40/100 in 2025. Meta's score fell from 60 to 31. Mistral's from 55 to 18. The companies building the most capable models are disclosing less about them.

For technology companies building AI governance programs, model governance documentation should be treated as a first-class product artifact — not a post-launch compliance exercise. Your model card should be drafted during development, updated at each significant change, and reviewed as part of your release process.
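
One way to enforce that in practice is a release gate that fails the build when the card is missing or stale. A minimal sketch, assuming a JSON model card checked into the repository (the file name and required sections are hypothetical):

# Hypothetical release-gate check: block a release when the model card
# is absent, documents a different version, or omits required sections.
import json
import sys
from pathlib import Path

REQUIRED_SECTIONS = ("intended_uses", "known_limitations", "bias_evaluations")

def check_model_card(release_version: str, card_path: str = "model_card.json") -> None:
    path = Path(card_path)
    if not path.exists():
        sys.exit("RELEASE BLOCKED: no model card found")
    card = json.loads(path.read_text())
    if card.get("version") != release_version:
        sys.exit(f"RELEASE BLOCKED: card documents version "
                 f"{card.get('version')!r}, release is {release_version!r}")
    for section in REQUIRED_SECTIONS:
        if not card.get(section):
            sys.exit(f"RELEASE BLOCKED: model card missing '{section}'")
    print(f"Model card check passed for {release_version}")

if __name__ == "__main__":
    check_model_card(sys.argv[1] if len(sys.argv) > 1 else "0.0.0")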

Open Source Model Governance: Distribution Without Control

Open source AI introduces a governance paradox. Releasing model weights enables innovation, research reproducibility, and competitive alternatives to closed providers. It also eliminates your ability to control how the model is used, by whom, and in what context.

The EU AI Act provides a limited exemption for open-source GPAI models: providers that release model parameters under a free and open-source license are exempt from certain transparency and documentation obligations, unless the model poses systemic risk; the copyright-policy and public training-data-summary requirements apply either way. The exemption also does not extend to high-risk AI system obligations. If someone integrates your open-source model into a high-risk application, the deployer bears the high-risk obligations, but you may still face provider liability if the model's design contributed to a harm.

Governance for open-source AI models should include detailed model cards and usage guidelines published alongside model weights, clear acceptable use policies with enforcement mechanisms where possible, community reporting channels for discovered vulnerabilities and misuse, version control and deprecation policies for models with known safety issues, and transparency about known limitations, failure modes, and populations underrepresented in training data.
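
The first several of those items can be enforced mechanically before weights are published. A sketch of such a pre-publication check, assuming a conventional repository layout (the file names are illustrative):

# Illustrative pre-publication check: verify that governance artifacts
# ship alongside open model weights. File names are assumptions.
import sys
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "MODEL_CARD.md": "model card and usage guidelines",
    "ACCEPTABLE_USE_POLICY.md": "acceptable use policy",
    "SECURITY.md": "vulnerability and misuse reporting channel",
    "DEPRECATION_POLICY.md": "versioning and deprecation policy",
}

def validate_release(release_dir: str) -> bool:
    root = Path(release_dir)
    missing = [name for name in REQUIRED_ARTIFACTS if not (root / name).exists()]
    for name in missing:
        print(f"MISSING: {name} ({REQUIRED_ARTIFACTS[name]})")
    return not missing

if __name__ == "__main__":
    sys.exit(0 if validate_release(sys.argv[1] if len(sys.argv) > 1 else ".") else 1)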

California's SB 53, signed into law in 2025 after the higher-profile SB 1047 was vetoed, requires developers of "frontier models" to publish transparency reports about safety testing and precautions. While narrower than SB 1047's original scope, it signals the direction: transparency reporting for advanced AI models is transitioning from voluntary to mandatory.

Downstream Liability and the Supply Chain Problem

Technology companies increasingly serve as links in an AI supply chain. You build a foundation model. A customer fine-tunes it. Their customer deploys it in a regulated context. When something goes wrong three layers down the chain, liability flows upward.

The EU AI Act addresses this directly. If a provider's AI system is substantially modified by a downstream party, the downstream party may become the new provider for compliance purposes. But "substantial modification" is a factual determination, not a contractual one. Fine-tuning a model on domain-specific data, adjusting safety filters, or integrating the model into a larger system may or may not constitute substantial modification depending on the specifics.

Your contracts with customers should define the boundaries of intended use, specify which modifications transfer provider status, require downstream compliance with applicable AI Act obligations, establish incident reporting obligations that flow back to you, and preserve your ability to audit downstream deployments of your AI.

Responsible AI Programs at Scale

Scaling responsible AI from a research initiative to an operational program is where most technology companies struggle. The pattern we've seen work: embed responsible AI requirements into your existing product development lifecycle rather than creating a parallel process.

Microsoft's approach — centralizing responsible AI requirements into a workflow tool that integrates with their development process — reflects this principle. Your AI governance program should include pre-development risk assessment and use-case evaluation, data governance standards enforced in training pipelines (not just documented), red-team testing as a gate in the release process, post-deployment monitoring with defined escalation paths, and incident response procedures specific to AI failures.
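
To make the monitoring-and-escalation piece concrete, here is a small sketch that maps observed production metrics onto escalation actions; the metric names, thresholds, and severity labels are assumptions for illustration, not recommendations:

# Sketch of post-deployment monitoring with a defined escalation path.
# Metric names, thresholds, and severity labels are illustrative.
from dataclasses import dataclass

@dataclass
class Alert:
    severity: str
    message: str

def evaluate_production_metrics(metrics: dict[str, float]) -> list[Alert]:
    """Map observed production metrics to escalation actions."""
    alerts: list[Alert] = []
    if metrics.get("accuracy_drop_pct", 0.0) > 5.0:
        alerts.append(Alert("page-oncall", "accuracy degraded >5% vs. baseline"))
    if metrics.get("refusal_rate_pct", 0.0) > 20.0:
        alerts.append(Alert("notify-owner", "refusal rate above expected band"))
    if metrics.get("pii_leak_count", 0) > 0:
        alerts.append(Alert("incident-response", "possible PII leak: open an AI incident"))
    return alerts

# Example: values as they might arrive from a monitoring pipeline.
for alert in evaluate_production_metrics({"accuracy_drop_pct": 6.2, "pii_leak_count": 1}):
    print(f"[{alert.severity}] {alert.message}")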

Build your governance program on a recognized framework. ISO/IEC 42001 provides the management system structure. The NIST AI RMF provides the risk management methodology. The EU AI Act provides the regulatory requirements. Layer them. They're complementary, not competing.

The decision between building AI governance tooling in-house or buying a platform depends on your scale, your regulatory exposure, and how central AI governance is to your product offering. Companies with dozens of models in production across multiple jurisdictions generally find that purpose-built governance platforms pay for themselves in reduced compliance labor and audit readiness.

Technology companies set the standard for the entire AI ecosystem. When your governance is rigorous, your customers inherit that rigor. When it's superficial, the gaps propagate through every deployment. The choice is yours — and the market is watching.

Start governing your AI products with Starkguard.

Starkguard Team

AI Governance Experts

Tags:
ai-governance
technology
saas
product-development
responsible-ai

Ready to implement AI governance?

Start your free trial and put these insights into practice with Starkguard.

