GitHub launches Code Brushes — a fascinating new “usable prototype” toolbox in the Copilot Labs Visual Studio Code extension. In theory, it can make your code more secure, easier to understand, and more.
In practice, however, machine learning can be a cognitive crutch, causing code vulnerabilities. Use with extreme caution!
But is it art? In this week’s Secure Software Blogwatch, we miss Bob’s amazing hair.
Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: You should write malware.
Good thing, or bad?
What’s the craic? David Ramel reports — “GitHub Copilot Labs Brews ‘Code Brushes’ for ML-Powered Code Modification”:
“Can perform many different kinds of actions”
Coming from the groundbreaking GitHub Copilot "AI pair programmer" team is a new machine learning-powered tool called Code Brushes. [It] aims to make code modification a more tactile process, similar to brush painting in image editing apps.
…
Copilot, of course, is the AI-powered coding assistant that has been making waves of many different kinds in the software development space since its introduction. … Code Brushes … can perform many different kinds of actions … including: Make code more readable … Fix simple bugs … Make your code more robust. … The GitHub Copilot Labs tool installs a VS Code sidebar that now has four features: code explanation, code translation, IDE Brushes and test generation.
Brush? How’d you mean? Ryan Daws explains — “GitHub Code Brushes uses ML to update code ‘like painting with Photoshop’”:
“A brush to make a form ‘more accessible’ automatically”
Using the feature, developers can “brush” over their code to see it update in real-time. … Several different brushes are included to achieve various aims. For example, one brush makes code more readable—especially important when coding as part of a team or contributing to open-source projects.
…
Code Brushes also supports the creation of custom brushes. One example is a brush to make a form “more accessible” automatically. [It] is powered by the controversial GitHub Copilot. Copilot uses technology from OpenAI to help generate code and speed up software development.
Horse’s mouth? Amelia Wattenberger — “Code Brushes”:
“Available for anyone with a Copilot license”
Painting is a very visceral activity — you dip your paintbrush in a color and dab it onto your image. We wondered if we could make editing code feel just as tactile. … Just select a few lines, choose your brush, and see your code update.
…
Let’s say you were working on code with a function that’s hard to digest. What would it look like to “paint” that code with a brush that makes it easier to understand? … Could adding types be as easy as clicking a button? … What if it were easy to fix simple bugs, like typos? … Or if those bugs are more complex, could a brush add debugging statements for you? [Or] make any code more robust with a click.
…
The Copilot Labs brushes toolbox … is available for anyone with a Copilot license. … In the future, we’re interested in adding more … brushes, as well as letting developers store their own custom brushes.
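To make that concrete, here's a hypothetical before/after of the kind of rewrite the "make readable" and "add types" brushes aim for. The function and names below are invented for illustration; actual brush output will vary:

```python
# Before brushing: terse, untyped, and hard to digest (invented example).
def f(xs, t):
    return [x for x in xs if x[1] > t]


# After a hypothetical "add types" + "make readable" pass:
from typing import List, Tuple

def filter_scores_above(
    scores: List[Tuple[str, float]], threshold: float
) -> List[Tuple[str, float]]:
    """Return the (name, score) pairs whose score exceeds the threshold."""
    return [(name, score) for name, score in scores if score > threshold]
```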
From a security perspective, this sounds like it could be useful—for spotting bugs, avoiding type confusion and robustly filtering malicious input. But Thomas Claburn isn’t a fan — “AI assistants help developers produce code that’s … buggy”:
“Create security vulnerabilities”
Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo. … Worse still, they found that AI help tends to delude developers about the quality of their output.
…
Previously, NYU researchers have shown that AI-based programming suggestions are often insecure. [They] found that given 89 scenarios, about 40 per cent of the computer programs made with the help of Copilot had potentially exploitable vulnerabilities.
…
Stanford boffins Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh … conclude that AI assistants should be viewed with caution because they can mislead inexperienced developers and create security vulnerabilities.
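The canonical failure mode in those studies is string-built SQL. Here's a minimal sketch (invented, but representative of the pattern the NYU and Stanford researchers flag) of an injectable query next to its parameterized fix:

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is interpolated straight into the SQL string.
    # A username like "x' OR '1'='1" returns every row in the table.
    cur = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cur.fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles quoting, so the input
    # can't change the query's structure.
    cur = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```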
Neither is Chief of Nowhere — @DruidChief:
At some point someone is going to start employing people whose experience is limited to Copilot and ChatGPT. Well, more work for me picking up the pieces, I suppose.
But Tom Vogt finds it “interesting”:
There's quite some hubris. … Should we really assume so quickly that a well-trained AI is worse than a few junior devs?
…
My job is in security, so I look from a bugs-and-exploits perspective. … Given the code I've seen over the years … I would say that a lot of human-written code definitely has room for improvement. Sometimes quite a lot.
…
Assuming it's well-trained … an AI … should be able to at least avoid the most common issues, and possibly be much better at writing code that follows a given guideline. I would still want a senior dev to do a code review. But he should do that on junior dev written code as well, so not much of a difference.
And there are other similar tools. jerkstate notes one:
I've been experimenting with this kind of thing lately, but what I've found more useful is the "edit" mode in the OpenAI Playground. You paste your existing code in there, give some instruction, then submit, and it seems to do pretty well. I'm hoping that the next generation of Copilot will be able to make use of this.
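For context, the Playground's edit mode fronts OpenAI's edits endpoint. At the time of writing, calling it from the pre-1.0 openai Python package looked roughly like this; treat the model name and response handling as a sketch against the then-current API, not gospel:

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Edit mode: pass existing code plus a natural-language instruction.
response = openai.Edit.create(
    model="code-davinci-edit-001",
    input="def add(a, b):\n    return a - b\n",
    instruction="Fix the bug so the function adds its arguments.",
)
print(response["choices"][0]["text"])
```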
What else is it good for? Here’s @coccoinomane’s favorite use case:
Cryptic code suddenly makes sense with the “List steps” brush in Copilot Labs. Great to get acquainted with legacy code!
Meanwhile, David Teren offers this neat précis:
Photoshop and Copilot had a baby!
And Finally:
Unpopular opinion: You should write malware
You have been reading Secure Software Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or ssbw@richi.uk. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.
Image sauce: Artiom Vallat (via Unsplash; leveled and cropped)