I used to work for a company that was building a predictive analytics system meant to estimate a person’s likelihood of committing a crime. It was a striking thing to think about.
Around that time I discovered the deep learning community. Deep learning is the technology that lets you reason about a neural network: how it works and how to make it perform better. It’s a way of thinking about data, data processing, and neural networks in very concrete terms.
I wanted to understand what I was learning and how it was being applied. I wanted the machine to understand me and my behavior: a deep neural network that could model how I might react to a given situation. That was the goal.
I had been studying deep learning long enough to know that speed matters, so my goal was to make my model run fast on GPUs. That meant getting the GPUs running first.
So the first step was to set up my GPU-accelerated neural network, and I had a bunch of GPUs to do it with.
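The setup step itself is small. A minimal sketch of the initial device check, assuming PyTorch as the framework (the fallback covers machines where it isn’t installed):

```python
def pick_device():
    """Return "cuda" when a CUDA-capable GPU is visible, else "cpu".

    Assumes PyTorch is the framework in use; this is an illustrative
    sketch, not the author's actual setup code.
    """
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # No torch installed: fall back to CPU.
        return "cpu"

print(pick_device())
```

Everything downstream (model, tensors, optimizer state) then gets moved to whatever device this returns.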
I built my first GPU-accelerated neural network for the reason you’ll hear about everywhere: I needed a lot of compute. I needed GPUs for data analysis and for training neural networks, and I wanted them busy at all times. In practice the GPUs themselves aren’t really the bottleneck; the hard part is keeping them fed with work for most of the day.
GPUs were important because they were fast, and that was a big deal in AI: for the parallel arithmetic at the heart of neural networks, they were a lot faster than CPUs. They were also cheap, because the same chips serve many different markets and run on inexpensive hardware, even though the design is specialized for one kind of work. That combination is what made GPUs powerful.
What was really appealing about GPUs was that they were fast and easy to set up. I started with one GPU dedicated to a specific task. When you’re building a machine that has to run heavy numerical workloads, the CPU is the slow path; it simply takes far longer. The GPU offers much more memory bandwidth, runs much faster, and is inexpensive on top of that. So it made sense to get my GPUs set up, and once I did, there was no going back. It just worked.
My GPUs all appeared to be running identically, but they were not. A few things set them apart. First, they ran at different clock frequencies, so a neural network pinned to one clock frequency was effectively running on one GPU, not two. Second, they had different memory sizes: a job that needed more memory bandwidth had to run across both GPUs at once, while a job that barely touched memory fit on a single GPU. Neither difference was large on its own, but together the clock and the memory are what set the two GPUs apart.
So for a lot less than $100, I could have two GPUs running in parallel at once, locked to the same clock speed and the same memory bandwidth. These are common knobs on GPUs, and that made it easy to scale the neural networks up and keep making them faster.
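To make the memory-bandwidth point concrete: a card’s theoretical peak bandwidth is just its effective memory data rate multiplied by the bus width in bytes. The specific numbers below are hypothetical round figures for illustration, not specs of my actual cards:

```python
def memory_bandwidth_gb_s(effective_rate_gt_s, bus_width_bits):
    """Theoretical peak memory bandwidth in GB/s.

    effective_rate_gt_s: effective data rate in gigatransfers/second
    bus_width_bits: memory bus width in bits (divided by 8 -> bytes)
    """
    return effective_rate_gt_s * (bus_width_bits / 8)

# Hypothetical card: 7 GT/s effective rate on a 256-bit bus.
print(memory_bandwidth_gb_s(7.0, 256))  # -> 224.0 GB/s
```

Two otherwise-similar cards with different bus widths or memory clocks will land on very different bandwidth numbers, which is exactly the mismatch I was seeing.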
But the GPUs I was using were much slower than CPUs at anything that didn’t run in parallel, which was not ideal. At the time, it wasn’t clear how much better GPUs would be than CPUs for deep learning, precisely because on serial work GPUs are slow. So the question was: how do we make the GPU beat the CPU in a way that’s actually more efficient? I think it came down to finding the right division of labor. For example, I knew the GPU could do things the CPU could not: it handled parallel processing far faster, while the CPU still had to take care of serial work like garbage collection and bookkeeping. Some of this was well known in AI, but we were still surprised by how much the GPUs were doing that the CPUs were not.
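The kind of work where GPUs pull ahead is the embarrassingly parallel kind, where every output element can be computed independently of the others. As a minimal sketch (plain Python standing in for what would really be a GPU kernel), the classic SAXPY operation has exactly that shape:

```python
def saxpy(a, x, y):
    """SAXPY: y_i = a * x_i + y_i for every i.

    Each element depends only on its own inputs, so all of them can
    be computed at once -- the shape of workload GPUs are built for.
    This list comprehension is a serial stand-in for that kernel.
    """
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # -> [12.0, 24.0]
```

Garbage collection, by contrast, chases pointers one after another, which is why that kind of work stays on the CPU.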
So when I did the GPU training, I didn’t really understand the details of the GPU’s capabilities at the time. But it was clear to me that the most important thing was that the training data had to match what you were training for. That really matters: you can’t say much about training when you don’t know the actual data, and once you do know it, you can make far better decisions about how to train. So, in a sense, I got the data.
But what was interesting was