
The ChatGPT Governance Checklist for Swiss Companies

What every Swiss company needs in place before letting employees use ChatGPT at work — data, contracts, training, and audit.

TecMinds Team · March 22, 2026 · 3 min read

Why Governance Matters Before Adoption

When ChatGPT entered the workplace, it didn't ask for permission. Employees pasted contracts, customer data, and source code into a public chat interface on personal accounts. Most leaders only found out when it was too late.

Governance isn't about blocking AI — it's about making it safe to use aggressively. The goal is to give your team a clear yes inside a well-defined playing field.

The 10-Point Checklist

1. Data Classification

Define what employees can and cannot paste into any AI tool. Three tiers usually work:

  • Public — marketing copy, public documentation, press releases (allowed)
  • Internal — meeting notes, drafts, internal wikis (allowed on approved tools only)
  • Restricted — customer PII, financials, trade secrets, source code (never in consumer tools)
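
To make the tiers actionable, some teams also encode them in a small helper that internal tools and scripts can check before anything is sent to an AI endpoint. The Python sketch below is hypothetical: the tier names mirror the list above, but the tool keys and the `is_allowed` helper are illustrative, not part of any real product.

```python
from enum import Enum

class DataTier(Enum):
    PUBLIC = "public"          # marketing copy, public docs, press releases
    INTERNAL = "internal"      # meeting notes, drafts, internal wikis
    RESTRICTED = "restricted"  # customer PII, financials, trade secrets, source code

# Illustrative mapping: which tiers each category of tool may receive.
ALLOWED_TIERS = {
    "consumer_chat_tool": {DataTier.PUBLIC},
    "approved_enterprise_tool": {DataTier.PUBLIC, DataTier.INTERNAL},
    # RESTRICTED data never leaves approved internal systems.
}

def is_allowed(tool: str, tier: DataTier) -> bool:
    """Return True if data of the given tier may be sent to the given tool."""
    return tier in ALLOWED_TIERS.get(tool, set())

assert is_allowed("approved_enterprise_tool", DataTier.INTERNAL)
assert not is_allowed("consumer_chat_tool", DataTier.RESTRICTED)
```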

2. Contractual Framework

Use the OpenAI API, the ChatGPT Enterprise plan, or Microsoft Copilot for Business — not personal accounts. These offer Data Processing Agreements, no training on your data, and EU/CH data residency where needed.

3. Swiss DSG Alignment

The revised Swiss Data Protection Act (revDSG) applies to AI processing just like any other data handling. Review your processing register, update your privacy notice if AI is involved in decisions affecting individuals, and document your risk assessment.

4. Written AI Policy

Two pages maximum. Who can use what, for what, with what data. Signed by every employee. Reviewed annually.

5. Approved Tools List

Publish a short list of approved AI tools with their approved use cases. Anything not on the list requires a request. This is the single most effective control.
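
One way to keep that list unambiguous is to maintain it as a small, version-controlled registry next to the policy. The structure below is a hypothetical example of what such a registry could look like; the tool names, owners, and dates are placeholders, not recommendations.

```python
# Hypothetical approved-tools registry; all values are illustrative examples.
APPROVED_TOOLS = [
    {
        "tool": "ChatGPT Enterprise",
        "use_cases": ["drafting internal docs", "summarising meeting notes"],
        "max_data_tier": "internal",
        "owner": "IT",
        "last_review": "2026-03-01",
    },
    {
        "tool": "Company-hosted API integration",
        "use_cases": ["customer-support draft replies (human-reviewed)"],
        "max_data_tier": "internal",
        "owner": "Engineering",
        "last_review": "2026-02-15",
    },
]

def lookup(tool_name: str):
    """Return the registry entry for a tool, or None if it is not approved."""
    return next((t for t in APPROVED_TOOLS if t["tool"] == tool_name), None)
```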

6. Training for Employees

Not a webinar. A 90-minute practical workshop: how to write good prompts, when to trust the output, when to verify, when to stop. Run it for every team.

7. Output Verification Rules

Anything going to a customer, a regulator, or into production code must be reviewed by a human who is accountable for the result. Write this rule down.

8. Audit Trail

For API-based usage and Enterprise plans, log prompts and outputs centrally where feasible. You need to be able to answer "who used what, when, for what" six months later.
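
For API-based usage, a thin wrapper around the model call is often enough to capture that trail. The sketch below is a minimal example assuming the official OpenAI Python SDK and a JSON-lines log file; the log location, record fields, and `user_id` handling are assumptions you would adapt to your own infrastructure and to your retention rules under the revDSG.

```python
import json
import datetime

from openai import OpenAI  # official OpenAI Python SDK (assumed available)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
AUDIT_LOG = "ai_audit_log.jsonl"  # in practice: a central, access-controlled store

def audited_chat(user_id: str, purpose: str, prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt to the model and append a who/when/what record to the audit log."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "purpose": purpose,  # e.g. "draft customer email"
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return output
```

A wrapper like this also gives you a single place to enforce the data-classification check from point 1 before a prompt ever leaves your network.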

9. Vendor Review

Before approving a new AI tool, review: data processing, security certifications (ISO 27001, SOC 2), data residency, retention policy, subprocessor list. Document it.

10. Incident Response

What happens if a customer dataset leaks to a consumer AI account? Who is notified? How fast? Write it once, test it, hope you never need it.

What Not to Do

  • Don't ban AI outright. Employees will use it anyway, on personal phones, off the record.
  • Don't publish a 40-page policy. Nobody reads it. Two pages. Plain language.
  • Don't make IT the only gatekeeper. Bring legal, HR, and one business owner into the room.

Where to Start

Pick three tools your team actually uses. Write a two-page policy. Run one workshop. You can be governance-ready in two weeks — and then you can move fast with confidence.

Need help drafting the policy or running the workshop? Get in touch. We've done it for teams of ten and teams of two hundred.

#ChatGPT #Governance #Compliance #DSG #Switzerland
