# evaluation
Here are 689 public repositories matching this topic...
nlp
machine-learning
natural-language-processing
computer-vision
deep-learning
metrics
tensorflow
numpy
evaluation
speech
pandas
pytorch
datasets
Updated May 8, 2021
Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
compiler
functional-programming
book
lambda-calculus
evaluation
type-theory
type
pdf-book
type-checking
haskel
type-system
functional-language
hindley-milner
type-inference
intermediate-representation
Updated Jan 11, 2021 - Haskell
Klipse is a JavaScript plugin for embedding interactive code snippets in tech blogs.
react
javascript
ruby
python
scheme
clojure
lua
clojurescript
reactjs
common-lisp
ocaml
brainfuck
evaluation
prolog
codemirror-editor
reasonml
interactive-snippets
code-evaluation
klipse-plugin
Updated Jan 30, 2022 - HTML
End-to-end Automatic Speech Recognition for Mandarin and English in Tensorflow
audio
deep-learning
tensorflow
paper
end-to-end
evaluation
cnn
lstm
speech-recognition
rnn
automatic-speech-recognition
feature-vector
data-preprocessing
phonemes
timit-dataset
layer-normalization
rnn-encoder-decoder
chinese-speech-recognition
Updated Feb 9, 2022 - Python
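The standard metric for evaluating speech recognition output is word error rate (WER). As a minimal pure-Python sketch (an illustration of the concept, not this repository's code), WER is the word-level edit distance normalized by the reference length:

```python
def word_error_rate(reference, hypothesis):
    """WER = edit distance between word sequences / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic programming over the (len(ref)+1) x (len(hyp)+1) grid.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(ref)

print(word_error_rate("the cat sat", "the cat sat down"))  # ≈ 0.333
```

One inserted word against three reference words gives a WER of 1/3; real ASR evaluations also normalize text (casing, punctuation) before scoring.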
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"
tracking
machine-learning
real-time
computer-vision
robotics
evaluation
evaluation-metrics
multi-object-tracking
kitti
3d-tracking
3d-multi-object-tracking
2d-mot-evaluation
3d-mot
3d-multi
kitti-3d
Updated Mar 11, 2022 - Python
Multi-class confusion matrix library in Python
data-science
data
machine-learning
data-mining
statistics
ai
deep-learning
neural-network
matrix
evaluation
mathematics
ml
artificial-intelligence
statistical-analysis
classification
accuracy
data-analysis
deeplearning
confusion-matrix
multiclass-classification
Updated Mar 17, 2022 - Python
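The core object such a library computes can be sketched in a few lines of plain Python (an illustration of the concept, not PyCM's actual API): a multi-class confusion matrix is a table of (actual, predicted) label counts, from which metrics like accuracy follow.

```python
from collections import defaultdict

def confusion_matrix(actual, predicted):
    """Count (actual, predicted) label pairs into a nested dict."""
    cm = defaultdict(lambda: defaultdict(int))
    for a, p in zip(actual, predicted):
        cm[a][p] += 1
    return cm

def accuracy(cm):
    """Overall accuracy = diagonal counts / total counts."""
    correct = sum(cm[c][c] for c in cm)
    total = sum(n for row in cm.values() for n in row.values())
    return correct / total

cm = confusion_matrix(["cat", "dog", "cat", "bird"],
                      ["cat", "dog", "dog", "bird"])
print(cm["cat"]["dog"])  # 1: one cat misclassified as dog
print(accuracy(cm))      # 0.75
```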
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
nlp
natural-language-processing
meteor
machine-translation
dialogue
evaluation
dialog
rouge
natural-language-generation
nlg
cider
rouge-l
skip-thoughts
skip-thought-vectors
bleu-score
bleu
task-oriented-dialogue
Updated Jan 13, 2022 - Python
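Metrics like BLEU are built from modified n-gram precision: candidate n-gram counts are clipped by their maximum count in the reference. A minimal unigram sketch (illustrative only, not this package's implementation):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Modified unigram precision as used in BLEU: each candidate
    token's count is clipped by its count in the reference."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    clipped = sum(min(n, ref[w]) for w, n in cand.items())
    return clipped / max(sum(cand.values()), 1)

print(unigram_precision("the the the cat", "the cat sat"))  # 0.5
```

The clipping is what stops a degenerate candidate like "the the the the" from scoring perfectly; full BLEU combines several n-gram orders with a brevity penalty.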
Short and sweet LISP editing
Updated Mar 10, 2022 - Emacs Lisp
XAI - An eXplainability toolbox for machine learning
machine-learning
ai
evaluation
ml
artificial-intelligence
upsampling
bias
interpretability
feature-importance
explainable-ai
explainable-ml
xai
imbalance
downsampling
explainability
bias-evaluation
machine-learning-explainability
xai-library
Updated Oct 30, 2021 - Python
FuzzBench - Fuzzer benchmarking as a service.
Updated Mar 22, 2022 - Python
Python implementation of the IOU Tracker
tracker
python
detection
evaluation
demo-script
mot
detrac
iou-tracker
detrac-train
eb-detections
ua-detrac
tracking-by-detection
Updated Feb 18, 2020 - Python
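Tracking-by-detection with IoU associates detections across frames by their bounding-box overlap. The underlying measure can be sketched as follows (illustrative, not the repository's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # ≈ 0.333
```

A tracker in this style keeps a detection attached to an existing track whenever the IoU with the track's last box exceeds a threshold, and otherwise starts a new track.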
A General Toolbox for Identifying Object Detection Errors
Updated Oct 15, 2021 - Python
TCExam is a CBA (Computer-Based Assessment) system (e-exam, CBT - Computer-Based Testing) for universities, schools, and companies that enables educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests, and exams.
testing
school
university
evaluation
exam
cba
essay
computer-based-assessment
cbt
multiple-choice
mcsa
computer-based-testing
e-exam
tcexam
mcma
Updated Oct 12, 2021 - PHP
Expression evaluation in golang
go
golang
parser
parsing
evaluation
godoc
expression-evaluator
expression-language
evaluate-expressions
gval
Updated Dec 5, 2021 - Go
SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
machine-learning
deep-learning
evaluation
labels
dataset
semantic-segmentation
semantic-scene-completion
large-scale-dataset
Updated Sep 24, 2021 - Python
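Semantic-segmentation results on benchmarks like SemanticKITTI are typically scored with mean intersection-over-union across classes. A minimal sketch over flat label arrays (illustrative, not the official evaluation script):

```python
def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and g == c for p, g in zip(pred, gt))
        union = sum(p == c or g == c for p, g in zip(pred, gt))
        if union:                      # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)

gt   = [0, 0, 1, 1, 2, 2]   # per-point ground-truth labels
pred = [0, 0, 1, 2, 2, 2]   # predicted labels
print(mean_iou(pred, gt, 3))  # ≈ 0.722
```

Per-class IoU penalizes both missed points (false negatives) and over-predicted points (false positives), which is why it is preferred over plain accuracy for imbalanced scene labels.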
Case Recommender: A Flexible and Extensible Python Framework for Recommender Systems
python
algorithm
feedback
evaluation
batch
ranking
recommendation-system
top-k
recommender-systems
forte
rating-prediction
Updated Nov 25, 2021 - Python
High-fidelity performance metrics for generative models in PyTorch
reproducible-research
metrics
evaluation
pytorch
gan
generative-model
reproducibility
precision
inception-score
frechet-inception-distance
kernel-inception-distance
perceptual-path-length
Updated Nov 23, 2021 - Python
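FID, one of the metrics such toolkits provide, is the Fréchet distance ||μ₁−μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½) between Gaussians fitted to real and generated feature activations. For the special case of diagonal covariances the matrix square root reduces to elementwise square roots, which makes the formula easy to sketch (illustrative, not the library's implementation):

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(v1 + v2 - 2*sqrt(v1*v2))."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

print(fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]))  # 0.0 for identical stats
print(fid_diagonal([1, 0], [1, 1], [0, 0], [1, 1]))  # 1.0
```

The general case needs a full matrix square root of C₁C₂ (e.g. `scipy.linalg.sqrtm`), and in practice the statistics are computed over InceptionV3 features, which is where the "high-fidelity" implementation details matter.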
Visual Object Tracking (VOT) challenge evaluation toolkit
Updated Apr 19, 2021 - MATLAB
C# Eval Expression | Evaluate, compile, and execute C# code and expressions at runtime.
Updated Mar 15, 2022 - C#
A collection of datasets that pair questions with SQL queries.
nlp
natural-language-processing
sql
database
neural-network
evaluation
dataset
dynet
natural-language-interface
Updated Dec 29, 2020 - Python
An extensive evaluation and comparison of 28 state-of-the-art superpixel algorithms on 5 datasets.
Updated Jul 31, 2021 - C++
A Simple Math and Pseudo-C# Expression Evaluator in One C# File. Can also execute small C#-like scripts.
parser
reflection
math
script
scripting
evaluation
fluid
mathematical-expressions-evaluator
expression
calculations
evaluator
mathematical-expressions
execute
expression-parser
eval
expression-evaluator
csharp-script
evaluate-expressions
evaluate
executescript
Updated Feb 22, 2022 - C#
Simple Safe Sandboxed Extensible Expression Evaluator for Python
Updated Mar 17, 2022 - Python
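The core idea behind a safe sandboxed evaluator is to parse input with the `ast` module and whitelist node types instead of calling `eval`. A minimal sketch of that technique (illustrative, not the library's actual code) that allows only numeric literals and basic arithmetic:

```python
import ast
import operator

# Whitelist of permitted binary operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr):
    """Evaluate an arithmetic expression without exposing builtins."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"disallowed expression: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4)"))  # 14
# safe_eval("__import__('os')") raises ValueError instead of executing code
```

Because unknown node types (calls, attribute access, subscripts) are rejected outright, the usual `eval` escape hatches never execute; extensibility then means deliberately adding names and functions to the whitelist.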
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark (Chinese medical information processing benchmark)
Updated Aug 28, 2021 - Python