After more than a decade of toiling in relative obscurity, the open source Julia language has arrived just in time for the AI/ML code modernization push at large companies.
There is a lot to be said for legacy codes – those that get their jobs done even when they lack flexibility and elegance. But as more scientific and technical code has to be updated to make AI/ML possible, it becomes especially attractive to start from a cleaner slate: one that combines the performance of C or C++ with the ease of use of Python, and whose differentiable programming and other machine learning tools are built specifically for these areas of science and engineering.
All of these capabilities have been at the core of the open source Julia language project for over a decade. The question now is whether its stability, notable commercial adoption, and monetization via Julia Computing will carry it through its own code and business transitions as the Fortune 500 and major research centers get serious about code modernization.
When Julia was conceived at MIT in 2009, the goal was to solve a persistent problem: the need to use two (or more) languages – one for high performance (C or C++) and another that makes programming complex systems a more pleasant experience (Python being the usual example). Either alone can get the job done, but there is inherent friction at the interfaces between them. On top of this basic mismatch, many high-value scientific and technical codes are the product of decades of accretion. They are inherently messy and rooted in code that was state of the art in the 1980s, particularly in modeling and simulation.
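To make that two-language friction concrete, here is a minimal, hypothetical Python sketch (the function names are illustrative, not from any real project): the readable pure-Python version runs in the interpreter and is slow at scale, so production code typically pushes the hot path down into compiled C – here via NumPy – leaving two languages in one program.

```python
# Hypothetical illustration of the "two-language problem":
# the readable loop is interpreted Python; the fast path is C under NumPy.
import numpy as np

def mean_python(xs):
    # clear and flexible, but every iteration runs in the interpreter
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

def mean_compiled(xs):
    # fast, but the real work happens in NumPy's compiled C kernels
    return float(np.mean(xs))

data = list(range(1, 101))
slow = mean_python(data)                      # 50.5
fast = mean_compiled(np.array(data, float))   # 50.5, far faster on large arrays
```

Julia's pitch is that the plain loop, written once, is JIT-compiled to native code, so there is no split between the language you prototype in and the language the hot path ends up in.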
Despite a clear call to arms and solid backing from MIT, Julia did not launch as an open source language project until 2012, and even then it remained a relatively small effort until version 1.0 arrived in late 2018. There were bumps along the road – the growing pains of life as an open source project, with breaking changes every month that early users had to continually adjust to – but things have been stable since then, and at just the right time.
Using the language as a stepping stone, Keno Fischer, one of the longtime advocates of Julia, began looking for real-world problems Julia could solve – not just as a standalone language, but as a supported platform, a self-contained ecosystem. After nearly a decade of working on Julia's low-level compiler and other key features, Fischer co-founded Julia Computing with two other longtime Julia developers. The aim was to put Julia to the test not just as a language, but as an optimized way of building code for the pharmaceutical, financial, HPC, energy, and other segments.
Over the past year or two, those efforts have paid off. Julia Computing has helped Pfizer simulate new drugs, AstraZeneca with AI-based toxicity prediction, European insurance giant Aviva with its compliance workloads, utility company Fugro Roames with an AI-based system for predicting power outages, the FAA with its airborne collision avoidance program, Cisco with ML-based network security, and a number of national laboratories and academic institutions with various research programs. Julia Computing made waves again this month with a DARPA grant to modernize semiconductor codes into more efficient simulation tools. In fact, that DARPA work shows why we are going to hear more about Julia – both the language and the company that spun out of it.
“What makes every area ripe for us, including the semiconductor industry, is that the standard tooling is built on minor improvements layered on since computing began in the 1970s and 1980s. Someone started writing software commercially or academically, and everything still sits on that old software foundation,” says Fischer. “If you look at something like the SPICE circuit simulator in the semiconductor space, everyone now has proprietary versions of this cobbled-together thing in different flavors. Julia solves these kinds of problems. Here is the two-language problem again: the simulator is written in C, but all the scripting is in Python. People want to do advanced things like parameterizations and measurements to incorporate ML, but all these tools from the '80s keep getting in the way.” It is not exactly the push of a button, but with a little effort Julia can provide the same functions on a much more modern stack, Fischer argues.
When it comes to industry adoption, there is a gap between an open source technology being cool and bringing it to the point where ordinary data scientists at Fortune 50 companies want to use it – that is a big leap. For Julia, the leap happened in slow motion, but now it is accelerating.
As a company, Julia Computing, founded in 2015, initially raised $4.6 million but has since sustained itself on its own cash flow, which, according to Fischer, came largely from early consulting work, especially for financial services users.
“Since 1.0 we have been on a journey to figure out how to make this a truly sustainable business. Consulting and support are good, but that scales with the size of the community, and we are at the level where it is enough to keep the language going while we go after the big problems in the pharmaceutical industry and other areas that we can solve,” says Fischer. “The real focus now is applying this technology in specific industries. We have the differential equation solvers and modern compiler technology that can replace 30-40 year old Fortran code. We believe we can use these tools to help industry and build a good business without sacrificing the open source efforts.”
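The kind of workflow Fischer describes – a parameterized simulation wired into an ML-style fitting loop – can be sketched as follows. This is a hypothetical toy example in Python with SciPy, not Julia Computing's actual code; the model, the parameter `k`, and the synthetic "measurements" are all invented for illustration.

```python
# Hypothetical sketch: a parameterized ODE model (dy/dt = -k*y) solved with
# a modern library integrator, then fit to measurements by a crude sweep --
# the loop that gets awkward when the solver is 1980s Fortran/C and the
# scripting lives in a separate language.
import numpy as np
from scipy.integrate import solve_ivp

def decay(t, y, k):
    # toy model: exponential decay with unknown rate k
    return -k * y

def simulate(k, t_eval):
    sol = solve_ivp(decay, (0.0, 5.0), [1.0], args=(k,), t_eval=t_eval)
    return sol.y[0]

t = np.linspace(0.0, 5.0, 20)
observed = np.exp(-0.7 * t)   # synthetic "measurements" generated with k = 0.7

# brute-force parameter sweep standing in for an ML-style fit
candidates = np.linspace(0.1, 2.0, 191)
losses = [np.sum((simulate(k, t) - observed) ** 2) for k in candidates]
best_k = float(candidates[int(np.argmin(losses))])
```

In Julia, packages like DifferentialEquations.jl let the model, the solver, and the fitting logic all live in one compiled language, which is the "modern stack" argument Fischer is making.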
And that is the whole point for people who have worked for years in open source obscurity on a project that may only now be having its day in the sun commercially. But how do you keep it afloat?
“There is always tension when you are trying to monetize open source. At the beginning of our journey, some VCs said it was easy to make money by holding back the last 2X and selling that. We didn't want that. We built this to give people the tools to solve really hard problems, and it made no sense to take that away in order to make money. The key strategy is applying the technology, rather than trying to make money by withholding it,” says Fischer.
When asked whether they are profitable with their 40 full-time employees, almost all of them longtime Julia committers, Fischer says they are, but it all goes straight back into the business to meet its R&D goals. That includes a bigger push on differentiable programming, especially since much of the domain-specific work is aimed at integrating ML into traditional scientific and engineering applications. In this way, Julia Computing can tap an important niche in these areas, filling in missing knowledge with data.
On the business development side, Fischer says they are putting a lot of effort into their JuliaHub cloud platform, which already has early adopters doing large-scale computing. The other focus is on specific domains. The semiconductor work is one example of domain-specific targeting, but Julia Computing's real successes have actually been in pharmaceuticals, through its work with Pumas-AI, a partner company that has done notable COVID work for some of the largest vaccine and drug makers.
Fischer says that with codes that are decades old, large companies are finally waking up to what it takes to evolve. At the same time, the open source traction that took Julia so long to build is finally paying off as developers choose Julia for new projects. It would be a stretch to say the world will run entirely on the Julia language, but for certain key scientific and industrial users with a bad, far-reaching case of dueling languages and interfaces, the AI/ML push might be just enough pressure to put Julia in a much brighter limelight in the coming year – and probably beyond.