As technical consultants advising clients on AI solutions, we should pay close attention to Microsoft's newly proposed blueprint for governing AI. While AI brings exciting new capabilities, it also creates new responsibilities. As trusted advisors to clients, it's important we stay up to date on the evolving laws, regulations and best practices around responsible AI development.
Several parts of the blueprint stand out as being particularly relevant:
- The implementation of standard AI safety frameworks will require us to develop real fluency in frameworks such as the NIST AI Risk Management Framework. We'll need to understand their requirements and attestation processes to guide clients on compliance.
- For clients operating in critical infrastructure sectors like energy, new safety brake requirements will be an important consideration in solution design. We'll need to advise clients on risk assessment processes and on building human oversight controls into AI systems. Proposed licensing of AI data centres adds another layer we'll need to keep in mind.
- As regulations develop that differentiate obligations by role in the AI tech stack, we’ll need to map client scenarios accordingly. For powerful AI models, we may need to help clients navigate licensing and required safety reviews by new regulators. There may also be knock-on effects to how clients utilise AI cloud infrastructure.
- Expectations around transparency, such as AI impact reports, will create new responsibilities for advisors. We can help clients implement ethical AI practices that instil public trust and fulfil regulatory requirements. Guiding clients to provide meaningful transparency into their AI systems should become a bigger part of our work.
- AI-powered initiatives to address major societal issues will create new opportunities for clients and demand for our expertise. We can connect clients to emerging public-private ventures where AI can drive progress on challenges like environmental sustainability.
This blueprint underscores how essential it is that we continue developing our own knowledge of responsible and ethical AI design patterns. We must help clients navigate rapidly evolving expectations. By steering clients to develop AI responsibly and beneficially from the start, we can play a big role in realising AI’s promise while mitigating risks. Trust in AI depends in large part on the guidance of advisors like us.