A data scientist’s take on Software 2.0.
Remember when software was eating the world? The trendy observation these days is that artificial intelligence (AI) is eating software. Even Google CEO Sundar Pichai has talked about software that “automatically writes itself.” And certainly if you consider software development to be little more than the creation of oft-repeated segments of code, then the rapid advances in AI would give software engineers pause.
Traditionally, developers have written software as a series of hard-coded rules: If X happens then do Y. The human instructs the machine, line by line. That’s Software 1.0. But Software 2.0 recognizes that — with advances in deep learning — we can build a neural network that learns which instructions or rules are needed for a desired outcome.
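To make the contrast concrete, here's a deliberately tiny sketch (everything in it is invented for illustration): a Software 1.0 spam check whose threshold a human hard-coded, next to a Software 2.0-style version that learns the same threshold from labeled examples instead.

```python
# Software 1.0: the rule is written by hand.
def is_spam_v1(num_links):
    return num_links > 5  # a human chose this threshold

# Software 2.0 (toy version): the rule is learned from labeled examples.
def learn_threshold(examples):
    """examples: list of (num_links, is_spam) pairs."""
    best_t, best_correct = 0, -1
    for t in sorted({n for n, _ in examples}):
        correct = sum((n > t) == label for n, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

data = [(1, False), (2, False), (8, True), (12, True)]
t = learn_threshold(data)

def is_spam_v2(num_links):
    return num_links > t  # the threshold came from the data, not a human
```

A real Software 2.0 system would learn millions of parameters with a neural network rather than one threshold, but the shift is the same: the human supplies examples, and the machine derives the rule.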
The argument made by 2.0 proponents like Andrej Karpathy, director of AI at Tesla, is that we won’t really write code anymore. We’ll just be finding data and feeding it into machine learning systems. In this scenario, we can imagine the role of software engineer morphing into “data curator” or “data enabler.” Whatever we call ourselves, we’ll be people who are no longer writing code.
“A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs or analyze their running times,” wrote Karpathy in a recent post titled Software 2.0. “They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.”
In a response to Karpathy’s post, Carlos E. Perez, author of The Deep Learning AI Playbook, writes that while “I agree with Karpathy that teachable machines are indeed ‘Software 2.0’, what is clearly debatable is whether these new kinds of systems are different from other universal computing machinery.”
Personally, I don’t think software engineering will go away anytime soon. Even if a new role evolves — call it Software 2.0 engineer or data scientist 2.0 or whatever — there are ways in which this technology shift will empower the practitioner of Software 1.0.
In fact, I’m not sure that software engineering, in the near future at least, will be completely different from what we do now. Yes, we’ll have help from deep learning neural network systems, but they’ll help us do our current job better rather than replace us entirely.
It’s a new world, sure, but we’re not planning to live in an episode of Black Mirror. AI-powered office assistants are already scheduling your day and starting your conference calls. There are even systems on the web that can generate a logo for your business and refine that logo based on your feedback.
Here’s another great example: a new demo shows how a deep learning network can convert a design mockup image into HTML code. It trains on existing web pages, learning how pixels on screen correspond to HTML tags (e.g. <h1>, <div>), in effect inverting the way a browser renders HTML into the image you see. Then you can feed it a new image and it will convert it into working HTML. One application is to dramatically shorten the prototyping cycle: a designer draws an interface, and a machine creates a usable web page from it.
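A heavily simplified sketch of that idea, using invented toy data: here a nearest-neighbour lookup over (image features, HTML) training pairs stands in for the neural network. A real system would use a convolutional encoder and a sequence decoder, but the shape of the training data is the same.

```python
# Toy training set: each pair is (image feature vector, HTML it came from).
# In a real system the features would be raw pixels and the model a deep net.
training_pairs = [
    ([0.9, 0.1], "<h1>Title</h1>"),          # big text block up top
    ([0.1, 0.9], "<div><p>Body</p></div>"),  # smaller text in a container
]

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def image_to_html(features):
    # "Predict" by returning the HTML of the closest training image.
    _, html = min(training_pairs, key=lambda pair: distance(pair[0], features))
    return html
```

Calling `image_to_html([0.8, 0.2])` returns the `<h1>` markup, because that feature vector sits closest to the first training example.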
The point is that traditional software engineering will be shaped by machine learning but it won’t go extinct. But shaped how?
Today, your phone automatically checks your spelling and suggests the next word. When you’re writing code, a similar tool highlights possible errors. As someone who does pair programming for Pivotal, I’m naturally drawn to think about Software 2.0’s impact on the way I work. Considering the advances in machine learning and conversational interfaces, it’s conceivable that a machine could one day be my other half.
For years, we’ve been using automated helpers to refactor and save time writing boilerplate code. And we’re now welcoming the emergence of AI-driven assistants in more complex software development as well. Lately, they have been appearing among product teams in the form of supercharged IDE features that can suggest better code completion.
Now imagine a far more advanced AI assistant playing a much larger role in the future. As you’re writing code, your machine partner might determine what kind of function you’re writing and fill the rest in for you, based on your style, using high-level predictive analysis. Essentially the machine writes the rest of the code for you, then you approve it.
Another area an AI assistant could help with is test-driven development. A human could write the tests while the machine partner iterates millions of times to find the right piece of code to solve those tests. Instead of doing both jobs — writing the tests and making the tests pass — you’d have a machine partner that does the latter. That would be helpful. You’d spend less time on implementation code and more time on understanding and solving business problems.
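A toy illustration of that division of labour (all names hypothetical): the human writes the test, and the machine partner searches a space of candidate implementations until one passes. A real system would search a vastly larger space, but the loop looks like this:

```python
import operator

# The human writes the test.
def passes(f):
    return f(2, 3) == 5 and f(10, -4) == 6 and f(0, 0) == 0

# The "machine partner" tries candidate programs until the test passes —
# a crude form of program synthesis. Real systems would generate and
# search millions of candidates rather than a fixed list.
candidates = [operator.sub, operator.mul, operator.add]

def synthesize():
    for f in candidates:
        if passes(f):
            return f
    return None  # no candidate satisfied the test

impl = synthesize()
```

Here the search lands on `operator.add`, the only candidate that satisfies all three assertions, and the human's remaining job is to review what the machine found.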
Way down the line, Software 2.0 might even help guide test-driven development and suggest the next test to be run, giving you the reasons why. Let’s imagine the marketing people go to the development team and say they want such and such functionality. If they can express what they want in a way the machine can understand — which is getting easier all the time — the machine could help you choose the tests that are needed and suggest next steps.
This raises the ultimate concern: will machines just replace software engineers altogether? The reality is more likely that, at best, we’ll get to something like 99% competence. But that still means failure 1% of the time, which means unpredictability. And that means you need a monitoring system to ensure that the code being written actually works. Maybe this is a new role for software engineers, similar to what Andrej alludes to in his post: monitoring the code and helping the machine learning system push its accuracy closer to 100%.
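One minimal sketch of what such a monitor could look like (all names here are hypothetical): validate each machine-generated result at runtime, and fall back to a trusted, human-reviewed implementation when the check fails.

```python
def monitored(generated_fn, trusted_fn, check):
    """Wrap a machine-generated function with a runtime monitor."""
    def wrapper(x):
        result = generated_fn(x)
        if check(x, result):
            return result
        # In a real system we'd also log the failure as new training data.
        return trusted_fn(x)
    return wrapper

# Pretend the model learned a bad edge case for large inputs.
def generated_sqrt(x):
    return x ** 0.5 if x < 100 else 0.0

def trusted_sqrt(x):
    return x ** 0.5

safe_sqrt = monitored(generated_sqrt, trusted_sqrt,
                      check=lambda x, r: abs(r * r - x) < 1e-6)
```

The monitored version returns the generated answer when it verifies, and silently routes around it when it doesn’t: `safe_sqrt(9)` gives 3.0 from the generated code, while `safe_sqrt(400)` catches the bad output and returns 20.0 from the trusted path.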
Now that we’ve outlined the conceivable benefits, the next question arises: which parts of software programming can move to the deep learning 2.0 framework, and which should remain in the traditional 1.0 framework? Today, it’s clear that deep learning neural networks do well in supervised learning settings, provided they’re given training data with both good and bad examples so they can learn the correct output. Google, for one, is using deep learning throughout its product suite.
But those systems are only as good as the training data. And, as one of my colleagues pointed out, improving a model’s performance frequently involves improving the underlying code and deployment environment, as well as improving the training data. In fact, some machine learning systems are getting so good that they’re actually bumping up against the human-caused flaws in the training data.
The reality is that neural networks are not a silver bullet. Rather, we need to design neural networks to work with other solutions. There are certain parts of software development that will work really well with deep learning and there are other parts that won’t.
If we look again at pair programming, what I’ve experienced is that there are many different ways to solve a problem with a partner. Software development is a process of constant collaboration with other colleagues. Every time a new pair comes together, the partners bring different experiences and different approaches to tackling a problem. The more pairs you bring together, the more solutions you get.
With Software 2.0, we’re adding a new partner to help developers do their job better. We envision the rise of a more energetic collaborative environment that leads to ever more, and ever more effective, solutions. And that’s good for everyone.
Change is the only constant, so individuals, institutions, and businesses must be Built to Adapt. At Pivotal, we believe change should be expected, embraced, and incorporated continuously through development and innovation, because good software is never finished.
About the Author: Ian Huston