I’ve been programming for decades, though usually for myself, not as a profession. My current go-to language is Python, but I’m thinking of learning either Swift (I’m currently on the Apple ecosystem) or Rust. Which one do you think will be the best in terms of machine learning support in a couple of years, and how easy is it to build macOS/iOS apps in Rust?
Rust is the only language I know of that is actively being used at the kernel level all the way through to the web app level. Compare that with Swift which is not only mostly tied to a single ecosystem, but even the “cross platform” stuff like libdispatch is littered with code like:
if #available(macOS 10.12, iOS 10.0, tvOS 10.0, watchOS 3.0, *)
Note libdispatch runs on older versions of Apple Platforms than those version numbers. The backwards compatible code paths aren’t just for other operating systems - that’s how it works on older Apple platforms too.
deleted by creator
Swift has little to no use outside the Apple ecosystem, and even if you are currently on Apple, you have to consider your targets as well. Writing in Swift means your code will only be usable by other Apple users, which is realistically a rather small fraction of technology users. Rust on the other hand is multiplatform and super low level; there are very few other languages out there that can match the potential applications of Rust code. Thus you will, in time, be introduced to many other technologies as well, like AI/ML, low level programming, web, integrations between languages, IoT - those are only a few of the possibilities. On the other hand, even if Swift has a much more mature ecosystem, it’s still only good for creating UIs in all things Apple, which is pretty telling; Apple is not willing to put in the time and effort to open its language to other fields, because it sees no value in being the one providing the tooling for other purposes. They pretty much only want people to code apps for them, and Swift delivers just fine for that. So if your current purpose is making Apple UIs, you could learn Swift, but be warned that you’ll either be doing that your whole life or will eventually be forced to change languages again.
Then again, most languages nowadays aren’t that different from each other. I can code in a truckload of languages, not because I actually spent time making something coherent and complete with each one of them, but because I know some underlying concepts that all programming languages follow, like OOP, or functional programming, and whatever those entail. If you learn those you will not be afraid to switch languages on a whim, because you’ll know you can get familiar with any of them within a day.
Just a nit: Swift is open source and there is a Swift ecosystem outside of Apple UI things. Here’s a Swift HTTP server that you can totally run on Linux.
Don’t get me wrong, Swift is OSS and there are things you can do with it apart from front-end dev, but there are usually better options out there for those other things. For example if I want an HTTP server, I’d choose JS, Kotlin, Rust, etc.
For example if I want an HTTP server, I’d choose JS, Kotlin, Rust, etc.
I wouldn’t. Swift is definitely better than any of those choices… and I say that as someone with decades of experience writing HTTP services.
I don’t currently use Swift for any of my HTTP servers - but only because it’s relatively immature for that task and I’m generally a late adopter (also, I work in an industry where bugs are painfully expensive). But I do use Swift client side, and I definitely intend to switch over to Swift for my server side work at some point in the near future, and it’s what I recommend for someone starting out today.
By far my favourite feature in Swift is the memory manager. It uses “Automatic Reference Counting”, which is essentially old school C or Assembly style memory management… except the compiler writes all of the memory management code for you. This often results in your code using significantly less RAM and better sustained performance than other languages, and it’s also just plain easier to work with - as an experienced developer I can look at Swift and know what it’s going to do at a low level with the memory. In modern garbage collected languages, even though I have plenty of experience with those, I don’t really know what they’re doing under the hood and I’m often surprised by how much memory they use. In server side code, where memory is expensive and traffic can burst to levels drastically higher than your typical baseload, using less memory - and predictable amounts of it - is really, really nice.
At one point, years ago, Apple had a compiler flag to use Garbage Collection or Automatic Reference Counting. The Garbage Collector worked just as well as in any other language… but there was no situation, ever, where it worked better than ARC so Apple killed their GC implementation. ARC is awesome and I don’t understand why it’s uniquely an Apple thing. Now that Swift is open source, it’s available everywhere. Yay.
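For a feel of how reference counting behaves, Rust’s Rc type (a library-level, non-atomic cousin of ARC) makes the count visible. This is a minimal sketch of the counting idea, not how Swift’s compiler actually emits retain/release:

```rust
use std::rc::Rc;

fn main() {
    // One heap allocation, with its reference count starting at 1.
    let a = Rc::new(String::from("hello"));
    assert_eq!(Rc::strong_count(&a), 1);

    {
        // Cloning the Rc copies the pointer and bumps the count;
        // the String itself is not duplicated.
        let b = Rc::clone(&a);
        assert_eq!(Rc::strong_count(&a), 2);
        println!("b = {}", b);
    } // b is dropped here: the count goes back to 1, deterministically.

    assert_eq!(Rc::strong_count(&a), 1);
    // When a is dropped at the end of main the count hits 0 and the
    // String is freed immediately - no collector pause, no heuristics.
}
```

The predictability point above is visible here: every decrement happens at a fixed, known line, rather than whenever a collector decides to run.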
I find compared to every other language I’ve ever used, with Swift I tend to catch mistakes while writing the code instead of while testing the code, because the language has been carefully designed to ensure as many common mistakes are compile time errors or at least warnings which require an extra step (often just a single operator) to tell the compiler that, yes, you did intend to write it like that.
That feature is especially beneficial to an inexperienced developer like OP.
The other thing I love about swift is how flexible it is. For example, compare these two blocks of code - they basically do the same thing and they are both Swift:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Create text field
        let textField = UITextField(frame: CGRect(x: 20, y: 100, width: 300, height: 40))
        textField.placeholder = "Enter text"
        textField.borderStyle = .roundedRect
        view.addSubview(textField)

        // Create button
        let button = UIButton(frame: CGRect(x: 20, y: 200, width: 300, height: 50))
        button.setTitle("Tap Me", for: .normal)
        button.backgroundColor = .blue
        button.addTarget(self, action: #selector(buttonTapped), for: .touchUpInside)
        view.addSubview(button)
    }
}
struct ContentView: View {
    @State private var text = ""

    var body: some View {
        VStack(spacing: 20) {
            // Text field
            TextField("Enter text", text: $text)
                .padding()
                .textFieldStyle(RoundedBorderTextFieldStyle())

            // Button
            Button("Tap Me") {
                print("Button was tapped!")
            }
            .padding()
            .background(Color.blue)
            .foregroundColor(.white)
            .cornerRadius(8)
        }
        .padding()
    }
}
I’m not a performance expert by any means, but… the bit about there being “no situation, ever” in which a garbage collector that “worked just as well as in any other language” outperformed reference counting seems dubious. The things I’ve read about garbage collection generally indicate that a well-tuned garbage collector can be fast but nondeterministic, whereas reference counting is deterministic but generally not faster on average. If Apple never invested significant resources in its GC, is it possible it just never performed as well as D’s, Java’s, or Go’s?
Check out this interview with Chris Lattner — one of the world’s best compiler engineers and the founder of not only the Swift language but also LLVM which backs many other languages (including Rust). It’s a high level and easily understood discussion (you don’t need to be a language expert) but it also goes into quite a few technical details.
https://atp.fm/205-chris-lattner-interview-transcript#gc
Chris briefly talks about the problems in the Apple GC implementation, but quickly moves onto comparing ARC to the best GC implementations in other languages. The fact is they could have easily fixed the flaws in their GC implementation but there just wasn’t any reason to. ARC is clearly better.
Apple’s GC and ARC implementations were both implemented at about the same time, and when ARC was immature there were situations where GC worked better. But as ARC matured those advantages vanished.
Note: that interview is six years old now - Swift was a brand new language at the time. They’ve done a ton of work on ARC since then and made it even better than it was, while GC was already mature and about as good as it’s ever going to get. The reality is garbage collection just doesn’t work well for a lot of situations, which is why low level languages (like Rust) don’t have a “proper” garbage collector. ARC doesn’t have those limitations. The worst possible scenario is that every now and then you need to give the compiler a hint to tell it to do something other than the default - but even that is rare.
Thanks for sharing the interview with Lattner; that was quite interesting.
I agree with everything he said. However, I think you’re either misinterpreting or glossing over the actual performance question. Lattner said:
The performance side of things I think is still up in the air because ARC certainly does introduce overhead. Some of that’s unavoidable, at least without lots of annotations in your code, but also I think that ARC is not done yet. A ton of energy’s been poured into research for garbage collection… That work really hasn’t been done for ARC yet, so really, I think there’s still a big future ahead.
That’s optimistic, but certainly not the same as saying there are no scenarios in which GC has performance wins.
Swift only treats Apple OSes as first class citizens - even though technically you can use it on other platforms it’s a painful and limited experience.
Rust on the other hand is multiplatform and super low level
Not to nitpick here, (I agree with pretty much everything you said) but I wouldn’t go around calling Rust super low level as it is garbage collected. The borrow checker acts as a abstraction over the actual malloc and free calls that are happening under the hood.
I think you don’t know what garbage collection is. Allocation and deallocation are how the heap works; the heap is one of the two main structures in memory, the stack being the other. No matter what language you are using, you cannot escape the heap, unless you aren’t on a modern multitasking OS. ARC is a type of garbage collection that decides when to free a reference after it is allocated (malloc) by counting how many places refer to it. When the count reaches 0, it frees the memory (free). With ARC you don’t know at compile time when a reference will be freed.
In Rust, the compiler makes sure, using the borrow checker, that there is only one place in your entire program where a given value can be freed, so that it can insert the free call at that place AT COMPILE TIME. That way, when the program runs there is no need for a garbage collection scheme or algorithm to take care of freeing up unused resources on the heap. Maybe you thought the borrow checker runs at runtime, taking care of your references, but that’s not the case: the borrow checker is a static analysis phase in the Rust compiler (rustc). If you want runtime borrow checking, it exists - it’s called RefCell - but its use isn’t generally encouraged. Plus, when you use RefCell, you usually also combine it with reference counting (Rc<RefCell<T>>).
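To make the compile-time vs runtime distinction concrete, here’s a small Rust sketch (the Noisy type is made up for illustration): the compiler inserts the free for x at a statically known point, while RefCell defers the borrow rules to runtime:

```rust
use std::cell::RefCell;

// A type that announces when its memory is released.
struct Noisy(&'static str);
impl Drop for Noisy {
    fn drop(&mut self) {
        println!("freeing {}", self.0);
    }
}

fn main() {
    {
        let x = Noisy("x");
        println!("{} is alive", x.0);
    } // rustc statically inserted the drop (free) call here.

    // RefCell is the runtime alternative: the borrow rules are
    // checked while the program runs, not by the borrow checker.
    let cell = RefCell::new(5);
    {
        let mut m = cell.borrow_mut();
        *m += 1;
        // While the mutable borrow is live, another borrow fails
        // at runtime rather than being rejected at compile time:
        assert!(cell.try_borrow().is_err());
    }
    assert_eq!(*cell.borrow(), 6);
}
```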
Perhaps garbage collection is the wrong term to use, as it doesn’t happen at runtime (I wasn’t sure what else to call what Rust does). But Rust does provide an abstraction over manual memory management, and if you are experienced with Rust you can probably visualize where the compiler would put the malloc and free calls. So it is kind of a mix: you do technically have control, it is just hidden from you.
Edit: It seems the term is just compile-time garbage collection so maybe you could consider it falling under garbage collection as an umbrella term.
Isn’t that basically the same as how C++ RAII works?
Essentially although there are a few key differences:
- In Rust there is always only one owner, while in C++ you can leak ownership if you are using shared_ptr.
- In Rust you can borrow references you do not own safely, and in C++ there is no guarantee a unique_ptr can be shared safely.
- In Rust, a lot more compile-time optimization is available via the borrow checker, whereas in C++ the type system doesn’t always let the compiler know for sure when an object goes out of scope, is moved, or is destroyed, so you miss out on a lot of optimization that would be trivial with Rust-like syntax.
You raised an issue that the other bulletpoint has the solution for, I really don’t see how these are “key differences”.
In Rust there is always only one owner, while in C++ you can leak ownership if you are using shared_ptr.
That’s what unique_ptr would be for. If you don’t want to leak ownership, unique pointer is exactly what you are looking for.
In Rust you can borrow references you do not own safely, and in C++ there is no guarantee a unique_ptr can be shared safely.
Well yeah, because that’s what shared_ptr is for. If you need to borrow references, then it’s a shared lifetime. If the code doesn’t participate in the lifetime, then of course you can pass a reference safely, even to whatever a unique_ptr points to.
The last bulletpoint, sure, that’s a key difference, but it’s partially incorrect. I deal with performance (as well as write Rust code professionally), and this set of optimizations isn’t so impactful in an average large codebase. There’s no magical optimization that can be done to improve how fast objects get destroyed, but what you can optimize is aliasing issues, which languages like C++ and C have trouble with (which is why vendor-specific keywords like __restrict exist). This can have a profound impact in very small segments of your codebase, though the average programmer is rarely ever going to run into that case.
Pretty much, with some additional rules like “you cannot mutate a reference while it is borrowed immutably elsewhere” or “you cannot borrow a reference mutably multiple times”.
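Those two rules can be sketched in a few lines of Rust (the commented-out push shows where the compiler would object):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Rule 1: any number of shared (immutable) borrows may coexist.
    let r1 = &v;
    let r2 = &v;
    assert_eq!(r1.len() + r2.len(), 6);
    // While r1/r2 are still in use, `v.push(4)` would be rejected:
    // error[E0502]: cannot borrow `v` as mutable because it is
    // also borrowed as immutable.

    // Rule 2: once the shared borrows are done, exactly one
    // mutable borrow is allowed at a time.
    let m = &mut v;
    m.push(4);
    assert_eq!(v, [1, 2, 3, 4]);
}
```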
deleted by creator
I think rust is good for learning some low level concepts, especially coming from python.
I don’t think Python is going anywhere in the ML space though.
Agree. I’m kinda looking for marketable skills though and I feel Python may be becoming saturated.
A programming language itself isn’t a marketable skill!
Learn the underlying concepts of programming and how computers work and you’ll be able to move from language/framework to pretty much any language/framework easily.
Language absolutely is a marketable skill because most companies are looking to hire someone who can start working day one not someone they’ll have to train for weeks or even months in a new language that heavily relies on some specific framework.
I have to disagree. I’ve been conducting interviews for a fairly large software shop (~2000 engineers) for about 3 years now and, unless I’m doing an intern or very entry level interview, I don’t care what language they use (both personally and from a company interviewer policy), as long as they can show me they understand the principles behind the interview question (usually the design of a small file system or web app)
Most devs with a good understanding of underlying principles will be able to start working on meaningful tasks in a number of days.
It’s the candidates who spent their time deep diving into a specific tool or framework (like leaving a rails/react boot camp or something) that have the hardest time adjusting to new tools.
Plus when your language/framework falls out of favor, you’re left without much recourse.
Other than having first class support on Apple’s hardware, Swift doesn’t have much going for it. There is no killer feature in Swift, it doesn’t have widespread features, and it only has a small niche. If you want to develop mainly for Apple devices I would say go for it, as that is the niche it was designed for. Although I see from your post you want to do ML; Python for the high level stuff + C++ for the low level stuff is probably your best pick for that. May I ask what type of ML you are going for? Are you mainly using libraries like Tensorflow, Pytorch etc., or are you into the nitty gritty of building these things yourself and writing the required code for the matrix math and training algorithms?
Swift is a nice language though.
But I’m obviously on team Rust^^ for various reasons, one being that you can do the whole stack in Rust (not that it’s necessarily the best choice for each level, but it really composes well, and with a little bit of trait-magic abstraction in the higher levels it works quite well IME).
For ML, Python yes - certainly for the high-level stuff, at least currently. I wouldn’t be so sure about the lower stack in the future though; Rust seems to be gaining momentum there as well (potentially replacing use-cases where Python is currently dominant too).
I think ML is probably going to require a lot of people in the future and I’m looking to build a digital nomad skill set that pays well. While I’ve done a postgrad subject on ML and have a STEM degree, I’m inclined to use existing libraries as that’s just easier.
There’s a recent Rust ML framework called “burn”. So maybe there’s also a future for ML in Rust for you.
If you want to train your neural nets you can maybe check out: https://github.com/rust-ml/linfa https://github.com/param087/swiftML (Rust seems to have more active support in terms of libraries)
If you want to integrate ML into an IOS/MacOs app: https://developer.apple.com/documentation/coreml
For userland apps Swift would be better; for training, or for generally being more useful in the future, go for Rust.
At the end of the day just choose the language that is more enjoyable for you.
Sensational answer! Thank you.
I think Python is still unmatched when it comes to ML, and nothing can beat Swift in terms of Apple ecosystem support. Why not learn both, though? I find Swift a bit harder to reason with than rust, but both have merit (and both have interesting use cases). Just see what uses you will find for them as you progress.
I was working on the assumption that it would make it harder to learn two at once. Maybe you are right though.
Honestly - now that you know one language learning any new language is a pretty simple task. For example - here’s a hello world in the three languages:
# Python
print("Hello, World!")

// Swift
print("Hello, World!")

// Rust
fn main() {
    println!("Hello, World!");
}
As you can see, the differences between Swift and Python are pretty minimal* and while Rust adds a whole bunch of extra busywork (you need a function, you need an exclamation point, you need a semicolon…) it’s generally the same thing.
(*) While that comparison of Python/Swift only differs in the comments, Swift is generally a much more complex language than Python, so you will need to learn a bunch of new concepts. For example if you needed to manually specify the output string encoding you’d write the Swift hello world like this:
let string = "Hello, World!" if let data = string.data(using: .utf16) { print(data) }
There are some common Swift language patterns there that are rare in other languages:
- if let will gracefully handle any errors that occur in the encoding step (there can’t be any errors when you’re using utf16 encoding, but if another encoding format was specified it might fail if, for example, you gave it an emoji).
- Swift allows you to interleave part of your function names in between the function arguments. That’s not a data() function, the function name is data(using:), and there are other function names that start with data( but accept totally different arguments; for example you might give it a URL and it would download the contents of the URL as the contents of the data.
- The .utf16 syntax is also something I haven’t seen elsewhere. The using parameter only accepts String.Encoding.something, and you can shortcut that by only writing the .something part.
For completeness, in python and rust you would do:
# python
string = "Hello, World!"
utf16_data = string.encode("utf-16")
print(utf16_data)

# rust
fn main() {
    let string = "Hello, World!";
    let utf16_data: Vec<u16> = string.encode_utf16().collect();
    println!("{:?}", utf16_data);
}
That’s actually a pretty good comparison of the three languages and an example of why I like Swift.
The syntax in Rust is absurdly complicated for such a simple task. And while the Python code is very simple, it doesn’t handle potential encoding errors as gracefully as Swift, and it also uses a string to specify the encoding, which opens up potential mistakes if you make a simple typo, and also you’ll have to do a Google search to check - is it “utf-16” or “utf16”? With Swift the correct encoding will auto-complete and it will fail to compile if you make a mistake.
Python actually isn’t my first language, just my current choice. I’ve programmed in Basic, Pascal, Fortran, PL-SQL, Prolog and C at various times in the past. My question was more about which is likely to scale over time to be the more popular ML language.
Python also sucks for macOS GUI apps, so I was contemplating building macOS/iOS apps for myself as a side quest.
Purely from the standpoint of making GUI apps in macOS/iOS, Swift is almost certainly the best choice. All of Apple’s UI frameworks are written in Swift (technically often Objective-C, but with Swift in mind), and designed to be used from Swift. It’s kind of possible to do this in C++ using Objective-C++, but nearly all of the UI code is going to be Objective-C anyways (Objective-C is the language that used to be the default on Apple platforms, but was replaced by Swift). It’s also certainly possible to use libraries for other languages that wrap this functionality, but these often can be missing features and/or be harder to work with. Additionally when looking for help, you’re likely to find much more support out there for the native frameworks since that’s what most people are using.
OK - well, at the end of the day the right approach is to have a problem you’re trying to solve and pick the best language for that (whether you know the language or not).
If it’s MacOS/iOS apps, then definitely don’t choose Rust. But reconsider that choice for your next project.
Also, with modern large language models, it’s even easier to work with an unfamiliar language. And honestly it wasn’t ever all that difficult.
The tricky part isn’t the syntax, it’s the domain knowledge. Well, actually it’s syntax, too. Swift has a whole lot of things that aren’t like anything else, with sprinkles of Objective-C. Rust turns the common patterns upside down because they make the borrow checker sad. But, in the end, what makes you a good engineer is knowing how to apply the tool to solve the problem, and that goes well beyond syntax.
Programming languages are like different kinds of saws: all of them are made to cut things, but there are nuances. Some are replaceable, others can be used for one specific thing. Knowing how to operate a hacksaw gives you some idea of how a chainsaw would work, even though they are fundamentally different. But think of it this way: what are you trying to do? Answering that will tell you which saw you need to use.
I’m trying to work out which one will have better ML support in a couple of years’ time.
I don’t think Rust has any specific features that target ML. Swift does, but it’s Apple hardware only.
One of the things that I’m struggling with on Python is the very poor support for AMD GPUs, which are in Macs. I’m sure Swift will do a better job of using the hardware capabilities.
Only old Macs have AMD GPUs.
If you’re looking for the best utilization of your hardware, I wouldn’t be surprised if Apple’s ML frameworks were best. Since Apple has a small set of hardware, they can write software that takes advantage of it better. Consider looking into Core ML which is by Apple and for Swift. Of course this will only work for Apple hardware, but if this is just for personal interest/hobby then that doesn’t really matter.
If you’re trying to prepare for a couple of years in advance, it might be worth spending a day playing with each language just to see which one feels best to you. Both languages should be able to do anything you want but some things will probably be more difficult in one or the other. I’ve never used swift, but I know rust can have a rather steep learning curve. That may be deterrent enough for some people, but that’s up to you to decide if that struggle is worth it.
Thanks, this makes some sense. I’ve started a few tutorials for Swift, and I added the Rust plugin/module to Visual Studio Code, but neither felt intuitive to me.
That doesn’t surprise me too much. They’re both a good bit different than python. It’s okay to take a little more time with each of them. Maybe try building one simple thing in both for more of a 1-1 comparison.
deleted by creator
If you don’t have a Mac I don’t think you can get the MacOS SDK.
So in that case I’d recommend Rust. I still think most of Rust’s tools/frameworks need more time in the oven but Rust is massive and has tools being built for everything. If you want Mobile I’d recommend you take a look at Dioxus or Tauri. There are probably others as well but I don’t know them it’s been a while since I’ve looked.
@Bluetreefrog
I, like you, code for myself, not others, and not professionally. Take a dive into Xcode and Swift if you’re in the Apple world. It is just stupid easy to throw together an app or tool in no time at all. Have you played with the Swift ML frameworks at all?
Pascal
Lol, Turbo Pascal was the first OO language I learned, back before there was any such thing as an Internet… Showing my age now.
Julia
I have thought about Julia.
What are your thoughts on it?
Julia looks like it is pointed towards ML programming and is fast, but I don’t see the same level of potential in a few years that Rust and Swift seem to have.
Rust seems to be generating a lot more buzz, and I’ve been seeing posts about Swift’s ML libraries that look interesting. My crystal ball seems to be saying that Rust will follow a similar arc to the one Python took and gain some serious ML creds through libraries built by community/industry. I think Swift will gain some credible ML capabilities too, because it has the Apple behemoth behind it.
Something to consider as well is learning both. Swift is certainly the best choice for making macOS/iOS GUIs. Other languages are probably better than Swift for your ML needs (could be rust, Python, etc.). However it’s totally possible to have an app using multiple languages. You could have the UI portion be in Swift, but the ML portions be in another language.
At my company we have a Mac app with the GUI written in Swift, shared logic with our Windows app written in C++, and some libraries written in Rust. So it’s certainly possible.
One caveat is that some languages don’t work with each other very well. Swift and Python do work well together iirc, so doing UI code in Swift and ML code in Python may not be a bad idea.
If you want to just stick to Swift, Apple does have some ML frameworks for Swift that you can use. I don’t do any work with ML, so I have no idea if these frameworks are any good, or have good resources for learning.
If you want to just stick with whatever language you use for ML, there are GUI libraries in nearly every language. These certainly won’t be as robust or as nice to work with as the native frameworks in Swift, but they could probably get the job done. I do know that a major issue with GUIs in Python is the difficulty in multi threading, which is a must for any app that performs long tasks without the UI freezing.
Just learn whatever you currently need. If you know a few paradigms, learning a new language of the same paradigm is easy-peasy and can be done rather quickly (well, at least being productive with it; doing stuff idiomatically often takes a little longer).
That said, Rust IMO is a language that makes sense to learn anyway, since it also teaches you to program in a nicer way. That’s not just true for Rust; other languages that introduce something really new (i.e. a new paradigm), such as Haskell, have this effect as well. Generally it makes sense to learn multiple languages, as each brings you new ideas. But on the other hand it makes sense to learn one language really well - I’d recommend that being Rust, as it can cover so many use-cases and is generally designed nicely (it fills a sweet spot between mutability and functional programming IMHO).