According to a recent announcement from CD Projekt, Cyberpunk 2077 development is back in full swing. Better yet, they have more people on it than they did on The Witcher 3. This is a game that sci-fi fans everywhere had written off as vaporware, and they’re bringing it back three years after the teaser trailer went up without further explanation. Between this, the release of Mankind Divided, the System Shock remake, and increasing interest in the genre, everything seems to be coming up Cyberpunk. I’ve been a long-time fan of the genre, and while this may be the year of class-based arena shooters, the next few should deliver that particular brand of gritty science fiction that has been so lacking in games for the past decade.
Part of what makes cyberpunk so enduring is that it deals with real-world problems – the ones likely to linger even into the distant future. Many works that fall under the label deal with human failings and assume the progression of technology will solve very little on a macro scale, which brings them closer to reality than most classical sci-fi. It becomes much easier to attempt social commentary when you aren’t working with stilted superhumans who have their every need taken care of by scientific achievements that are functionally magical.
As time goes on, we’re seeing more and more eerie similarities between reality and older cyberpunk fiction. The first true brain implant is entering human trials this year; we have robots being trained to herd cattle or aid in fire rescue, and there’s even a long-running television series dedicated to showcasing robot fights. Machine learning has progressed more in the past five or six years than ever before, and we’re casually training computers to do things like recognize criminals, identify art styles, or hallucinate. Speaking of which:
It’s amazing what you can do with a few hours and a little tinkering in Python. The Cyberpunk 2077 announcement inspired me to train a neural network of my own, and I did so using the works of Jean Giraud (AKA Moebius, the famous science fiction and comic artist) as a template. It works by compressing images and trying to recognize familiar structures (in this case, sci-fi comic art) in them. It’s the same way Google’s Deep Dream works, only the results are more variable and there aren’t any ‘hashtag puppyslugs’ in the finished product. The result is psychedelic noise in most cases, but if you tune it just right, you can get some very interesting results:
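The core trick behind Deep Dream and friends is surprisingly small: instead of training the network, you “train” the image, nudging pixels by gradient ascent so that some feature detector responds more strongly. Here’s a toy sketch of that idea in plain NumPy – the single random 3×3 filter stands in for a real pretrained network layer, which is an assumption for illustration, not what my actual script (or Google’s) uses:

```python
import numpy as np

# Toy Deep-Dream sketch: hold a "feature detector" fixed and do gradient
# ascent on the IMAGE so the detector fires harder. A random 3x3 filter
# stands in for a layer of a real pretrained CNN (illustrative assumption).

rng = np.random.default_rng(0)
image = rng.random((32, 32))           # toy grayscale "image"
filt = rng.standard_normal((3, 3))     # stand-in for a learned feature

def activation(img):
    """Sum of squared filter responses over every 3x3 patch."""
    total = 0.0
    for i in range(img.shape[0] - 2):
        for j in range(img.shape[1] - 2):
            total += (img[i:i+3, j:j+3] * filt).sum() ** 2
    return total

def grad(img):
    """Analytic gradient of `activation` with respect to the pixels."""
    g = np.zeros_like(img)
    for i in range(img.shape[0] - 2):
        for j in range(img.shape[1] - 2):
            resp = (img[i:i+3, j:j+3] * filt).sum()
            g[i:i+3, j:j+3] += 2.0 * resp * filt   # chain rule per patch
    return g

before = activation(image)
for _ in range(20):                    # gradient ascent on the pixels
    g = grad(image)
    image += 0.01 * g / (np.abs(g).max() + 1e-8)   # normalized step
after = activation(image)
```

After twenty steps the filter’s response is measurably higher than it started – swap the random filter for a deep layer of a network trained on Moebius panels and that same loop is what hallucinates comic-art textures into a photo.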
This may not seem that useful or groundbreaking, and it isn’t. There are neural network trainers trying to do the same thing with music, ones that are helping robots learn without human intervention, and ones that are designing circuits on FPGAs that work well but defy human design best practices and principles. It’s a revolution in the way we interact with technology, and it’s cyberpunk as fuck. With a neural network, a few lines of code, and a free compositing tool called Natron, I was able to turn a five second morph animation loop into a visualization that reacts to music:
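The music-reactive part is the least magical piece of the whole thing. The idea is just to measure how loud the track is during each video frame, then use that loudness to pick a frame from the morph loop (quiet picks early frames, loud picks late ones). Here’s a hypothetical sketch of that mapping – the sample rate, frame rate, and loop length are illustrative assumptions, not the exact numbers from my Natron project:

```python
import numpy as np

# Illustrative music-reactive mapping: per-frame RMS loudness of the audio
# drives which frame of a short morph loop gets shown. All constants here
# are assumptions for the sketch.

SAMPLE_RATE = 44100
FPS = 24
LOOP_FRAMES = 120                 # a 5-second loop at 24 fps

def frames_for_audio(audio):
    """Map an audio signal to a sequence of loop-frame indices."""
    hop = SAMPLE_RATE // FPS      # audio samples per video frame
    n = len(audio) // hop
    rms = np.array([np.sqrt(np.mean(audio[i*hop:(i+1)*hop] ** 2))
                    for i in range(n)])
    level = rms / (rms.max() + 1e-12)          # normalize to 0..1
    return (level * (LOOP_FRAMES - 1)).astype(int)

# Two seconds of a swelling tone as stand-in "music"
t = np.linspace(0, 2, 2 * SAMPLE_RATE, endpoint=False)
audio = np.sin(2 * np.pi * 220 * t) * np.linspace(0, 1, t.size)

frames = frames_for_audio(audio)
```

Feed the resulting frame indices into the compositor’s time-remap node and the loop breathes along with the track. Real setups usually smooth the loudness curve first so the animation doesn’t stutter on every drum hit.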
To put this all in perspective: I’m not a programmer, I don’t work in computers for a living, and I’ve never taken a comp-sci class in my life. Think about that for a minute. If a writer can teach his computer to hallucinate in a few hours, imagine the places the scientific community is taking this type of programming, and imagine what we’ll be able to do a decade or so down the road. Our children might consider programming a form of basic literacy. They might see prostheses and medical implants with electronic minds of their own become commonplace. We could be the first generation to enjoy consumer robot servants that actually work. The possibilities are as staggering as they are awesome. We may not be headed for a cyberpunk future, but each year it looks to me more and more like we are, and I, for one, can’t wait to get there.