Installation

Numba can be installed with the following command:
:::bash
pip3 install numba
If you need CUDA support, you also have to install the CUDA toolkit and drivers, for example:
:::bash
sudo apt-get install cuda-10-1
Instead of going through the hassle of configuring Numba for the GPU locally, a better option is to run Numba in an NVIDIA Docker container. The Docker image nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 already has the CUDA runtime installed, so it is as easy as installing Numba on top of it and you are ready to go. For more detailed instructions, please refer to the links in the References section below.
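Once CUDA is available (locally or inside such a container), a quick way to confirm that Numba can reach the GPU is to launch a tiny kernel. This is only a minimal sketch, not part of the original benchmark; the kernel name add_kernel and the array sizes are arbitrary, and it assumes numpy is installed alongside Numba:

:::python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    # each thread handles one element of the arrays
    i = cuda.grid(1)
    if i < x.size:
        out[i] = x[i] + y[i]

n = 100000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)

threads_per_block = 128
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)
print(out[:5])  # expect [ 0.  3.  6.  9. 12.]

If this runs without errors, the GPU setup is working.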
The following example benchmarks a simple moving-average calculation accelerated with Numba. First, generate some random test data:

:::python
from __future__ import division

import random
import time

# benchmark parameters
data_length = 100000   # number of data points
ma_length = 500        # moving-average window size
test_times = 10        # number of benchmark repetitions

data = []
for i in range(data_length):
    data.append(random.randint(1, 100))
Next, define the moving-average function and decorate it with @numba.jit so that Numba compiles it just in time:

:::python
import numba

@numba.jit
def ma_numba(data, ma_length):
    ma = []
    data_window = data[:ma_length]
    test_data = data[ma_length:]
    for new_tick in test_data:
        # slide the window forward by one tick
        data_window.pop(0)
        data_window.append(new_tick)
        # recompute the window sum and store the average
        sum_tick = 0
        for tick in data_window:
            sum_tick += tick
        ma.append(sum_tick / ma_length)
    return ma
Finally, time the function over test_times runs. Note that the first call also triggers JIT compilation, so that overhead is included in the measurement:

:::python
start = time.time()
for i in range(test_times):
    result = ma_numba(data, ma_length)

time_per_test = (time.time() - start) / test_times
time_per_point = time_per_test / (data_length - ma_length)

print("time_per_test: %s s" % time_per_test)
print("time_per_point: %s us" % (time_per_point * 1000000))
Deprecation Notices

Reflection for List and Set types will be deprecated in Numba. You should use numba.typed.List and numba.typed.Set instead.
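As a minimal sketch of the replacement API, the example below builds a numba.typed.List and passes it into a jitted function (the function name sum_list is illustrative, not from the original text):

:::python
import numba
from numba.typed import List

@numba.njit
def sum_list(values):
    # iterate over a typed list inside nopython mode
    total = 0.0
    for v in values:
        total += v
    return total

typed_values = List()
for v in [1.0, 2.0, 3.0]:
    typed_values.append(v)

print(sum_list(typed_values))  # 6.0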
References

Speed Up your Algorithms Part 2 - Numba: https://towardsdatascience.com/speed-up-your-algorithms-part-2-numba-293e554c5cc1
https://github.com/numba/numba/pull/4432
https://thedatafrog.com/en/boost-python-gpu/
https://nyu-cds.github.io/python-numba/05-cuda/
https://numba.pydata.org/numba-doc/dev/user/5minguide.html?highlight=target%20cuda
https://numba.pydata.org/numba-doc/dev/reference/jit-compilation.html?highlight=target%20cuda