- Speeding up Neural Network Training With Multiple GPUs and Dask - Sep 14, 2021.
A common moment when training a neural network is realizing the model isn’t training quickly enough on a CPU and that you need to switch to a GPU. It turns out multi-GPU model training across multiple machines is pretty easy with Dask; a minimal cluster-setup sketch follows below. This blog post is about my first experiment in using multiple GPUs with Dask and the results.
Dask, GPU, Neural Networks, Training
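Not code from the post itself, but a minimal sketch of the kind of setup it describes, assuming the dask-cuda and distributed packages are installed on a machine with NVIDIA GPUs:

```python
# Minimal sketch (not the post's own code): start a Dask cluster with one
# worker per local GPU, then connect a client to it.
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

cluster = LocalCUDACluster()   # one worker process per visible GPU
client = Client(cluster)
print(client)                  # reports how many workers / GPUs are available
```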
- GPU-Powered Data Science (NOT Deep Learning) with RAPIDS - Aug 2, 2021.
How to utilize the power of your GPU for regular data science and machine learning even if you do not do a lot of deep learning work.
Data Science, GPU, Python
- Not Only for Deep Learning: How GPUs Accelerate Data Science & Data Analytics - Jul 26, 2021.
Modern AI/ML systems’ success has been critically dependent on their ability to process massive amounts of raw data in parallel using task-optimized hardware. Can we leverage the power of GPUs and distributed computing for regular data processing jobs too?
Data Analytics, Data Science, Deep Learning, GPU
- How to Use NVIDIA GPU Accelerated Libraries - Jul 1, 2021.
If you are wondering how you can take advantage of NVIDIA GPU accelerated libraries for your AI projects, this guide will help answer questions and get you started on the right path.
GPU, NVIDIA, Programming
- Super Charge Python with Pandas on GPUs Using Saturn Cloud - May 12, 2021.
Saturn Cloud is a platform that offers 10 hours of free GPU computing and 3 hours of free Dask cluster computing each month. In this tutorial, you will learn how to use these free resources to process data using Pandas on a GPU; a short sketch follows below. The experiments show that Pandas on a CPU is over 1,000,000% slower (roughly 10,000 times) than Pandas running on a Dask cluster of GPUs.
Cloud, GPU, Pandas, Python
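The post covers Saturn Cloud's managed setup; as a hedged illustration of the underlying idea, cuDF (the RAPIDS GPU DataFrame library) exposes a Pandas-like API that executes on the GPU. The file and column names below are hypothetical:

```python
# Hedged sketch: Pandas-style work on the GPU via cuDF. The CSV file and its
# columns are made up for illustration.
import cudf

gdf = cudf.read_csv("data.csv")                    # load straight into GPU memory
summary = gdf.groupby("category")["value"].mean()  # familiar Pandas-like API
print(summary)
```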
- Good-bye Big Data. Hello, Massive Data! - Oct 22, 2020.
Join the Massive Data Revolution with SQream. Shorten query times from days to hours or minutes, and speed up data preparation by analyzing the raw data directly.
Big Data, GPU, SQream
- HOSTKEY GPU Grant Program - Aug 10, 2020.
The HOSTKEY GPU Grant Program is open to specialists and professionals in the Data Science sector whose research or other projects center on innovative uses of GPU processing and will yield practical results in the field of Data Science. Its objective is to support basic scientific research and prospective startups.
Data Science, GPU, Research
- PyTorch Multi-GPU Metrics Library and More in New PyTorch Lightning Release - Jul 2, 2020.
PyTorch Lightning, a lightweight wrapper for PyTorch, recently released version 0.8.1, a major milestone. With incredible user adoption and growth, the team is continuing to build tools for easily doing AI research. A short multi-GPU training sketch follows below.
GPU, Metrics, Python, PyTorch, PyTorch Lightning
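As a rough, hedged sketch of multi-GPU training with the 0.8-era Lightning API (the toy model and data below are made up; newer releases replaced the gpus and distributed_backend arguments):

```python
# Rough sketch of 0.8-era PyTorch Lightning multi-GPU training; the toy model
# and random data exist only for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class TinyRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": nn.functional.mse_loss(self(x), y)}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

train_loader = DataLoader(TensorDataset(torch.randn(256, 8), torch.randn(256, 1)),
                          batch_size=32)
# gpus= and distributed_backend= were the 0.8.x-era flags for multi-GPU training
trainer = pl.Trainer(gpus=2, distributed_backend="dp", max_epochs=2)
trainer.fit(TinyRegressor(), train_loader)
```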
- A Complete guide to Google Colab for Deep Learning - Jun 16, 2020.
Google Colab is a popular cloud service for machine learning that features free access to GPU and TPU computing. Follow this detailed guide to get up and running fast and develop your next deep learning algorithms with Colab; a quick GPU-check sketch follows below.
Deep Learning, GitHub, Google Colab, GPU, Jupyter
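A quick first-cell check, assuming a GPU runtime has been selected under Runtime > Change runtime type, is to print the GPU Colab assigned:

```python
# Quick sanity check for a GPU-enabled Colab notebook: print the assigned GPU.
import subprocess

result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
print(result.stdout or "No GPU visible - check the runtime type.")
```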
- Deep Learning Breakthrough: a sub-linear deep learning algorithm that does not need a GPU? - Mar 26, 2020.
Deep Learning sits at the forefront of many important advances underway in machine learning. With backpropagation being a primary training method, its computational inefficiencies require sophisticated hardware, such as GPUs. Learn about this recent breakthrough algorithmic advancement: improvements to the backpropagation calculations on a CPU that outperform large neural network training on a GPU.
Algorithms, Deep Learning, GPU, Machine Learning
- Easily Deploy Deep Learning Models in Production - Aug 1, 2019.
Getting trained neural networks to be deployed in applications and services can pose challenges for infrastructure managers. Challenges like multiple frameworks, underutilized infrastructure and lack of standard implementations can even cause AI projects to fail. This blog explores how to navigate these challenges.
Deep Learning, Deployment, GPU, Inference, NVIDIA
- Here’s how you can accelerate your Data Science on GPU - Jul 30, 2019.
Data Scientists need computing power. Whether you’re processing a big dataset with Pandas or running some computation on a massive matrix with NumPy, you’ll need a powerful machine to get the job done in a reasonable amount of time; a short GPU clustering sketch follows below.
Big Data, Data Science, DBSCAN, Deep Learning, GPU, NVIDIA, Python
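The tags hint at the article's running example; as a hedged sketch of GPU-accelerated clustering with RAPIDS (assuming cuDF and cuML are installed), DBSCAN can run directly on a GPU DataFrame:

```python
# Hedged sketch (not the article's own code): DBSCAN clustering on the GPU
# with RAPIDS cuML. The toy points are made up.
import cudf
from cuml import DBSCAN

points = cudf.DataFrame({"x": [1.0, 1.1, 0.9, 5.0, 5.2, 5.1],
                         "y": [1.0, 0.9, 1.1, 5.0, 5.1, 4.9]})
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
print(labels)   # two clusters expected for this toy data
```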
- Mastering Fast Gradient Boosting on Google Colaboratory with free GPU - Mar 19, 2019.
CatBoost is a fast implementation of GBDT with GPU support out of the box. Google Colaboratory is a very useful tool with free GPU support. A minimal GPU training sketch follows below.
CatBoost, Google Colab, GPU, Gradient Boosting, Machine Learning, Python, Yandex
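A minimal sketch of what out-of-the-box GPU support looks like in CatBoost: pass task_type="GPU" to the estimator. The tiny synthetic dataset is only for illustration:

```python
# Minimal sketch: CatBoost trains on the GPU simply by setting task_type="GPU".
import numpy as np
from catboost import CatBoostClassifier

X = np.random.rand(1000, 10)
y = (X[:, 0] > 0.5).astype(int)   # synthetic labels, illustration only

model = CatBoostClassifier(iterations=200, task_type="GPU", devices="0", verbose=50)
model.fit(X, y)
```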
- Best Deals in Deep Learning Cloud Providers: From CPU to GPU to TPU - Nov 15, 2018.
A detailed comparison of the best places to train your deep learning model for the lowest cost and hassle, including AWS, Google, Paperspace, vast.ai, and more.
Cloud Computing, Deep Learning, GPU, TPU
- A Crash Course in MXNet Tensor Basics & Simple Automatic Differentiation - Aug 16, 2018.
This is an overview of some basic functionality of the MXNet ndarray package for creating tensor-like objects, and of the autograd package for performing automatic differentiation. A short sketch follows below.
GPU, MXNet, Python, Tensor
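A short sketch of the two packages the overview covers: nd for tensor-like arrays and autograd for automatic differentiation:

```python
# Create an ndarray, record operations on it, and compute gradients with autograd.
from mxnet import nd, autograd

x = nd.array([[1.0, 2.0], [3.0, 4.0]])
x.attach_grad()               # ask MXNet to allocate gradient storage for x
with autograd.record():       # record the computation graph
    y = (x ** 2).sum()
y.backward()
print(x.grad)                 # dy/dx = 2 * x
```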
- PyTorch Tensor Basics - May 11, 2018.
This is an introduction to PyTorch's Tensor class, which is reasonably analogous to NumPy's ndarray, and which forms the basis for building neural networks in PyTorch. A small sketch follows below.
GPU, Python, PyTorch, Tensor
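A small sketch of the ndarray-like behaviour described above, including moving a tensor to the GPU when one is available:

```python
# Tensors behave much like NumPy ndarrays, and the same code can run on a GPU.
import torch

t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(t.shape, t.dtype)        # ndarray-style metadata
print(t @ t.T)                 # matrix multiply with NumPy-like semantics

if torch.cuda.is_available():  # move to the GPU when one is present
    t = t.to("cuda")
    print(t.device)
```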
- Comparing Deep Learning Frameworks: A Rosetta Stone Approach - Mar 26, 2018.
A Rosetta Stone of deep learning frameworks has been created to allow data scientists to easily leverage their expertise from one framework to another.
Caffe, CNTK, Deep Learning, GPU, Keras, Microsoft, MXNet, PyTorch, TensorFlow
- For GPU Databases of today, the big challenge is doing JOINS - Mar 2, 2018.
While some GPU database problems have been solved, one challenge remains that only one vendor has tackled properly: fast SQL joins on the GPU.
Brytlyt, Database, GPU, Postgres
- Fast.ai Lesson 1 on Google Colab (Free GPU) - Feb 8, 2018.
In this post, I will demonstrate how to use Google Colab for fastai. You can use a GPU as a backend for free for 12 hours at a time. GPU compute for free? Are you kidding me?
Deep Learning, fast.ai, Google, Google Colab, GPU, Jupyter
- Supercharging Visualization with Apache Arrow - Jan 5, 2018.
Interactive visualization of large datasets on the web has traditionally been impractical. Apache Arrow provides a new way to exchange and visualize data at unprecedented speed and scale.
Apache Arrow, Big Data, Data Analytics, Data Visualization, Dremio, GPU, Graphistry, Open Source
- Tensorflow Tutorial, Part 2 – Getting Started - Sep 28, 2017.
This tutorial will lay a solid foundation for your understanding of TensorFlow, the leading Deep Learning platform. The second part shows how to get started, install, and build a small test case; a present-day install-check sketch follows below.
Deep Learning, GPU, Python, TensorFlow
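The tutorial targets the TensorFlow of its day; as a small present-day sketch of the same install-and-test step (written against the TensorFlow 2 API, not the 1.x API the 2017 tutorial used):

```python
# Check the install, list visible GPUs, and run a tiny computation (TensorFlow 2 API).
import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))   # an empty list means CPU-only

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(a, a))                          # runs on the GPU if one is visible
```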
- The Rise of GPU Databases - Aug 17, 2017.
The recent but noticeable shift from CPUs to GPUs is mainly due to the unique benefits they bring to sectors like AdTech, finance, telco, retail, and security/IT. We examine where GPU databases shine.
Big Data, Database, GPU, Predictive Analytics, SQL, SQream
- Deep Learning – Past, Present, and Future - May 2, 2017.
There is a lot of buzz around deep learning technology. First developed in the 1940s, deep learning was meant to simulate neural networks found in brains, but in the last decade 3 key developments have unleashed its potential.
Andrew Ng, Big Data, Deep Learning, Geoff Hinton, Google, GPU, History, Neural Networks, NVIDIA
- Parallelism in Machine Learning: GPUs, CUDA, and Practical Applications - Nov 10, 2016.
Machine learning tasks often lack parallel processing, which limits performance, yet adding it may very well be worth the trouble. Read on for an introductory overview of GPU-based parallelism, the CUDA framework, and some thoughts on practical implementation; a small kernel sketch follows below.
Algorithms, CUDA, GPU, NVIDIA, Parallelism
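As one hedged, Python-flavoured illustration of the idea (using Numba's CUDA support, not necessarily the article's own example), here is a kernel that adds two vectors with one GPU thread per element:

```python
# Hedged illustration: a CUDA kernel launched from Python via Numba.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)          # global thread index
    if i < x.size:
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)   # Numba copies the arrays to and from the GPU
print(out[:5])
```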
- Basics of GPU Computing for Data Scientists - Apr 7, 2016.
With the rise of neural networks in data science, the demand for computationally intensive machines has led to GPUs. Learn how you can get started with GPUs and the algorithms that can leverage them.
Algorithms, CUDA, Data Science, GPU, NVIDIA
- Popular Deep Learning Tools – a review - Jun 18, 2015.
Deep Learning is the hottest trend now in AI and Machine Learning. We review the popular software for Deep Learning, including Caffe, Cuda-convnet, Deeplearning4j, Pylearn2, Theano, and Torch.
Convolutional Neural Networks, CUDA, Deep Learning, GPU, Pylearn2, Python, Ran Bi, Theano, Torch
- Facebook Open Sources deep-learning modules for Torch - Feb 9, 2015.
We review Facebook's recently released Torch modules for Deep Learning, which help researchers train large-scale convolutional neural networks for image recognition, natural language processing, and other AI applications.
Artificial Intelligence, Deep Learning, Facebook, GPU, Neural Networks, NYU, Ran Bi, Torch, Yann LeCun