CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
-
Usually, after training a model, I save it in C++ format with this code:
cat_model.save_model('a', format="cpp")
cat_model.save_model('b', format="cpp")
But my C++ program needs to use multiple models. In my main.cpp:
#include "a.hpp"
#include "b.hpp"
int main() {
    // do something
    double a_pv = ApplyCatboostModel({1.2, 2.3}); // I want a.hpp's model here
    double b_pv = ApplyCatboostModel({1.2, 2.3}); // and b.hpp's model here
}
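One common workaround (a sketch only, assuming each generated file pulls in only standard headers and defines ApplyCatboostModel at file scope; the namespace names are mine) is to include the standard headers once at global scope and then wrap each generated file in its own namespace, so the two definitions no longer collide:

#include <cstdio>
#include <string>
#include <vector>

// The standard headers above are already included, so their include guards
// make any re-includes inside the generated files expand to nothing, and
// each ApplyCatboostModel definition lands in its own namespace.
namespace model_a {
#include "a.hpp"
}
namespace model_b {
#include "b.hpp"
}

int main() {
    std::vector<float> features{1.2f, 2.3f};
    double a_pv = model_a::ApplyCatboostModel(features); // a.hpp's model
    double b_pv = model_b::ApplyCatboostModel(features); // b.hpp's model
    std::printf("a: %f, b: %f\n", a_pv, b_pv);
    return 0;
}

Renaming the function in one of the generated files, or compiling each one as a separate translation unit behind a small wrapper, achieves the same thing without the include trick.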
-
Implement GPU versions of numpy.* functions in the cupy.* namespace.
This is a tracker issue that lists the remaining numpy.* APIs (see also: the comparison table). I've categorized them by difficulty so that new contributors can pick the right task. Your contributions are highly welcome and appreciated!
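For illustration, a minimal sketch of the numpy/cupy parity this tracker is driving toward (assumes cupy is installed with a working CUDA runtime):

import numpy as np
import cupy as cp

x_cpu = np.arange(6, dtype=np.float32)  # host array
x_gpu = cp.arange(6, dtype=cp.float32)  # same call, device array

print(np.sum(x_cpu))              # computed on the CPU
print(cp.sum(x_gpu))              # computed on the GPU
print(cp.asnumpy(cp.sum(x_gpu)))  # copy the result back to the host

Each entry on the list is meant to slot into cupy.* with the same signature and semantics as its numpy.* counterpart.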
List of APIs
-
For pandas API compatibility, we can implement Series.autocorr. autocorr calculates the Pearson correlation between the Series and itself lagged by N steps. Conceptually, this is a combination of shift and corr.
import pandas as pd
s = pd.Series([0.25, 0.5, 0.2, -0.05])
print(s.autocorr())
print(s.autocorr(lag=2))
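As a sketch of that shift-and-corr composition (the helper name series_autocorr is mine, not part of the proposal):

import pandas as pd

def series_autocorr(s: pd.Series, lag: int = 1) -> float:
    # Pearson correlation between the series and itself shifted by `lag`;
    # pandas aligns on the index and drops the NaNs the shift introduces.
    return s.corr(s.shift(lag))

s = pd.Series([0.25, 0.5, 0.2, -0.05])
print(series_autocorr(s))         # matches s.autocorr()
print(series_autocorr(s, lag=2))  # matches s.autocorr(lag=2)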
-
Can a tmfile be produced directly from training? The tengine-convert-tool conversion step fails with this error:
tengine-lite library version: 1.4-dev
Get input tensor failed

Or is there an example that shows how to train the tmfile below?
-

You would then create your njit function and run it, and I believe the idea is that it prints debug information about whether
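For context, a minimal njit sketch (the function is illustrative, not taken from the issue; whatever debug output is enabled would appear when the first call triggers compilation):

from numba import njit

@njit
def add(a, b):
    # compiled in nopython mode on the first call
    return a + b

add(1, 2)  # triggers compilation; enabled debug output prints here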