Responsible AI pair programming with GitHub Copilot
What is GitHub Copilot, and how can it be used effectively and responsibly?
GitHub Copilot boosts developer productivity, but using it responsibly still requires good developer and DevSecOps practices.
GitHub Copilot is an AI pair programmer that uses OpenAI Codex to suggest code and even entire functions in real time as a developer codes in an integrated development environment (IDE). It boosts developer productivity but is not a replacement for good coding practices and DevSecOps processes. Developers use higher-level abstractions that make programming more accessible and productive, and AI models that assist developers are well-positioned to become ubiquitous.
Developers implement a theory or model to solve a business problem, and cognitive load is the amount of working memory needed to complete a task. Programming broadly involves three types of cognitive load: intrinsic, extraneous, and germane. GitHub Copilot helps reduce the extraneous load so that developers can devote more working memory to the pertinent business problem rather than to the mechanics of programming itself. It also reduces context switching, which is known to impede focused, productive work.
However, GitHub Copilot is not meant to replace developers, nor is it intended to replace good practices and processes for scanning, testing, and validating code. Developers should use it effectively and responsibly, understanding its limitations and verifying that its suggestions are correct and secure. GitHub Copilot fits into the inner loop of the development process, where developers code, run, and debug locally and conduct peer code reviews, typically through pull requests. DevSecOps platforms such as GitHub can then provide security tooling to help ensure the code is secure and meets compliance requirements.
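As a sketch of what inner-loop verification can look like, consider a small hypothetical example: Copilot suggests a helper function, and the developer writes tests to confirm its behavior before the code ever reaches a pull request. The function name, regex, and test cases below are illustrative assumptions, not taken from the article.

```python
import re

# Hypothetical helper as Copilot might suggest it (illustrative only).
def is_valid_email(address: str) -> bool:
    """Return True if the address matches a simple email pattern."""
    pattern = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"
    return re.fullmatch(pattern, address) is not None

# Inner-loop verification: the developer, not the AI, confirms the
# suggestion handles expected inputs and rejects malformed ones.
def test_is_valid_email():
    assert is_valid_email("dev@example.com")
    assert not is_valid_email("dev@")          # missing domain
    assert not is_valid_email("not an email")  # spaces are rejected
```

The point is not the regex itself but the habit: an AI suggestion is treated as a draft, and tests, code scanning, and peer review remain the developer's responsibility.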
The original article is “Responsible AI pair programming with GitHub Copilot.”