When are compiled vs. interpreted languages more optimal in AI?



When are interpreted languages more optimal? When are compiled languages more optimal? What are the qualities and functions that render them so in relation to various AI methods?


Posted 2019-11-20T03:14:10.083

Reputation: 5 886


For AI in the broadest sense, I don't think the comparison is any different to general programming. If you could narrow this down to a specific task or problem, it might be possible to answer. Otherwise it seems rather broad and not well defined. See https://stackoverflow.com/questions/3265357/compiled-vs-interpreted-languages

– Neil Slater – 2019-11-20T08:31:42.077



If it's just a local home project that only you will access, then any language is fine. If you are working in a team, then decide together. In general, compiled languages are good because they check the code before execution, whereas interpreted languages don't require a build step for your project. The most practical choice would be Python, since it's the language with the most AI libraries.

– Harold Ed – 2019-11-20T08:47:27.770

@NeilSlater fair point - possibly I should reference the "Python vs. Java/C" debate in general. (The intent here was not to be entirely specific, but to address a fundamental aspect of that debate.) – DukeZhou – 2019-11-21T01:46:39.873



Interpreted languages allow for a faster development cycle, as they don't require time for compilation, and fragments can often be run without having a complete program. They often also have fewer constraints for variable declaration or typing. That means they can be used to quickly scope out a problem and try different solutions.
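As a minimal sketch of that workflow (the problem and function names here are hypothetical, chosen just for illustration): in Python you can define and swap alternative solutions interactively, with no declarations and no build step.

```python
# Exploratory prototyping: no type declarations, no compilation.
# Toy task: compare two distance metrics on a few sample points.

points = [(0, 0), (3, 4), (6, 8)]

def euclidean(a, b):
    # Straight-line distance between two 2-D points.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def manhattan(a, b):
    # Grid ("taxicab") distance between two 2-D points.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Swap metrics freely while experimenting -- each edit runs immediately.
for metric in (euclidean, manhattan):
    print(metric.__name__, metric(points[0], points[1]))
```

Each fragment here can be pasted into a REPL and run on its own, which is exactly the fast trial-and-error loop the paragraph describes.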

The drawback is the slower execution speed. But during development this is not a big factor; it only becomes important in a production environment. So one option would be to use an interpreted language during the R&D phase, and then re-implement the algorithm in a compiled language for performance improvements.

Since ML and NNs have become more prevalent in AI, numerical computing has become more important. This is an area where interpreted languages traditionally don't perform too well, so one would use a (compiled) library for, say, neural networks or genetic algorithms, and use 'glue code' to integrate this into a bigger system. The glue code would transform/prepare data and convert it between the different formats required by the libraries. This is often done in interpreted scripting languages, as the glue might have to be changed more frequently and is not performance-critical.
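A small sketch of that glue-code pattern, using NumPy as the compiled library (the record format is a made-up example): Python reshapes the data, and the heavy numerics run in NumPy's compiled C routines.

```python
# 'Glue code' sketch: Python prepares the data, a compiled library
# (NumPy's C routines here) does the numerical work.
import numpy as np

# Hypothetical raw records, e.g. parsed from CSV or JSON.
records = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]

# Glue: convert to the array format the compiled library expects.
X = np.array([[r["x"], r["y"]] for r in records])

# The actual numerics execute in compiled code.
means = X.mean(axis=0)
print(means)  # column means of X
```

The conversion step is the part that changes as data sources change, which is why it lives in the scripting language rather than the library.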

Apart from development, the type of computation is also key: as mentioned, numerical computing generally works better with compiled code, but interpreted languages often have advantages in symbolic programming. This is why Lisp and Prolog have become popular AI languages, as opposed to Fortran or C.
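To make the symbolic side concrete, here is a tiny symbolic-differentiation sketch, the kind of structure manipulation Lisp excels at, written in Python for consistency with the rest of the thread (the tuple-based expression encoding is an illustrative choice, not a standard one):

```python
# Tiny symbolic differentiation over nested-tuple expressions.
# An expression is a number, a string (a symbol), or a tuple
# ('+' | '*', left, right).

def diff(expr, var):
    """Return the derivative of expr with respect to var."""
    if isinstance(expr, (int, float)):
        return 0                      # d/dx of a constant
    if isinstance(expr, str):
        return 1 if expr == var else 0  # d/dx of a symbol
    op, a, b = expr
    if op == "+":                     # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                     # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x)  ->  1*x + x*1
print(diff(("*", "x", "x"), "x"))
```

The program is rewriting expression trees rather than crunching numbers, which is why languages built around list/term manipulation historically had the edge here.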

In an ideal world you would use an interpreted language for development, and then compile this once you're done. However, due to the way these languages work, compilation is often non-trivial.

Oliver Mason

Posted 2019-11-20T03:14:10.083

Reputation: 3 755

I think this answer is pretty good. It may be worth also mentioning that some interpreted languages (e.g. Python) have the ability to run high-performance compiled code directly for portions of the program where speed really matters. For example, something like PyTorch allows one to write high performance ML code in Python, by invoking lower-level libraries written in C in a way that looks like one is writing regular Python code. – John Doucette – 2019-11-21T02:06:29.680
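The pattern this comment describes can be sketched with NumPy standing in for PyTorch-style libraries: the syntax is ordinary Python, but the reduction is dispatched to compiled code.

```python
# Python syntax, compiled execution: the same sum computed two ways.
import numpy as np

a = np.arange(100_000, dtype=np.float64)

# Pure-Python loop: interpreted, element by element.
total_py = 0.0
for v in a:
    total_py += v

# Same reduction dispatched to NumPy's compiled C implementation.
total_np = a.sum()

assert total_py == total_np  # identical result, very different speed
```

Both lines compute the same value; in practice the compiled version is orders of magnitude faster on large arrays, which is what makes writing "high performance ML code in Python" possible.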