Best laptop for machine learning

There was a time, when I was a student, when I was obsessed with more speed and more cores so that I could run my algorithms faster and longer. My point of view has since changed. Big hardware still matters, but only after you've considered a lot of other factors. Let's try to select the best laptop for machine learning.

Laptop for machine learning: Hardware

The lesson is that if you're just starting out, your hardware doesn't matter. Focus on training with small datasets that fit in memory, such as those from the UCI Machine Learning Repository.
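As a concrete illustration (my own sketch, not from the article): the classic Iris dataset comes from the UCI Machine Learning Repository and is tiny. scikit-learn bundles a copy, so you can load it without a download and without worrying about memory.

```python
# Example: a small UCI dataset (Iris) that easily fits in memory.
# scikit-learn bundles a copy, so no download is needed.
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
print(X.shape)  # (150, 4): 150 rows, 4 features
print(y.shape)  # (150,)
```

At this scale, any laptop is more than enough; the bottleneck is your thinking, not your CPU.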

Learn good experimental design and make sure you ask the right questions and challenge your intuition by testing a variety of algorithms and interpreting your results through the lens of statistical hypothesis testing.

Once hardware starts to matter and you really need lots of cores and lots of RAM, rent it just in time for a carefully designed project or experiment.

More processors! More RAM!

I was naive when I first discovered artificial intelligence and machine learning. I would take all available data and use it in my algorithms. I would rerun my models with small parameter tweaks to improve the final score. I ran my models for whole days or weeks. I was obsessed.

This mainly stemmed from the fact that I was testing my machine learning skills in competitions. Obsession can be good; you can learn a lot very quickly. But applied incorrectly, you can waste a lot of time.

I built my own machines in those days. I would upgrade my CPU and RAM frequently. This was in the early 2000s, before multi-core was the clear path (for me), and even before there was much talk (at least in my circles) about using GPUs for non-graphics work. I needed more and faster processors, and I needed lots and lots of RAM. I even commandeered my housemates' computers so I could do more runs.

A little later, when I was in graduate school, I had access to a small cluster in the laboratory, and I began to use it. But things started to change, and it became less important how much computing power I had.

The results are wrong

The first step in my change was to discover a good (any) experimental design. I discovered statistical hypothesis testing tools that allowed me to understand whether one result was really significantly different (for example, better) than another result.

Suddenly, the fractional improvements I thought I had achieved were nothing more than statistical noise. This was an important change. I started spending a lot more time thinking about experimental design.
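The kind of check described above can be sketched with a paired t-test on per-fold cross-validation scores. This is a minimal illustration using only the Python standard library; the fold scores are made-up numbers, not real results.

```python
# A minimal sketch of a paired t-test on cross-validation scores
# from two models evaluated on the same 10 folds.
# The scores below are invented for illustration.
import math
import statistics

model_a = [0.81, 0.79, 0.84, 0.80, 0.82, 0.78, 0.83, 0.80, 0.81, 0.79]
model_b = [0.82, 0.80, 0.85, 0.81, 0.83, 0.79, 0.84, 0.80, 0.82, 0.80]

diffs = [b - a for a, b in zip(model_a, model_b)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Two-sided critical value for alpha = 0.05 with n - 1 = 9 degrees
# of freedom (from a standard t-table).
T_CRIT = 2.262

if abs(t_stat) > T_CRIT:
    print(f"t = {t_stat:.2f}: the difference is statistically significant")
else:
    print(f"t = {t_stat:.2f}: the difference may just be noise")
```

In practice a library such as SciPy (`scipy.stats.ttest_rel`) gives you the p-value directly; the point is that a tiny average improvement only counts if it clears a test like this.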

The questions are wrong: laptops for machine learning

I shifted my obsession to making sure I was asking good questions.

I now spend a lot of time writing down as many questions, and variations of questions, as I can think of for a given problem. I want to make sure that when I run long computational jobs, the results I get really matter. That they are going to influence the problem.

You can see this when I highly recommend spending a lot of time defining your problem.

Intuitions are wrong: looking for the machine learning laptop

Good hypothesis testing shows you how little you actually know. It did for me, and it still does. I "knew" that a given configuration of a given algorithm was stable, reliable, and good. The results, when interpreted through the lens of statistical tests, quickly taught me otherwise.

It shifted my thinking to rely less on my old intuitions and to reframe my intuition through the lens of statistically significant results.

Now, I don't assume I know which algorithm, or which class of algorithm, will do a good job on a given problem. I spot-check a diverse set and let the data guide me.
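Spot-checking a diverse set of algorithms might look like the sketch below. I'm using scikit-learn here as an assumption (the article's own tool is Weka), with a synthetic dataset so the example is self-contained.

```python
# A sketch of spot-checking a diverse set of algorithms with
# scikit-learn (my choice here; the article uses Weka).
# make_classification generates a synthetic dataset, so the
# example runs without any external data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# A deliberately diverse set: linear, instance-based, tree, probabilistic.
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=1),
    "naive_bayes": GaussianNB(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

The per-fold scores from runs like these are exactly what you would then feed into a significance test before declaring a winner.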

I also strongly advise you to carefully consider your testing options and to use tools such as the Weka Experimenter that build hypothesis testing into the interpretation of results.

The best is not the best

For some problems, the best results are brittle.

I used to be into optimizing non-linear functions (and related competitions), and you could spend huge amounts of computational time exploring (in retrospect, essentially enumerating) search spaces, coming up with structures or configurations that were only marginally better than easily found solutions.

The point is that those hard-to-find configurations were usually very strange, or exploited bugs or quirks in the domain or simulator. These solutions were good for competition or for experimentation, because the numbers were better, but not necessarily viable for use in the field or in operations.

I see the same pattern in machine learning competitions. A quickly and easily found solution may score lower on the performance metric, but it is reliable.
