Open-source, app-free push notification service; on iOS 14+ just scan a QR code to use it. Also supports Quick App, iOS and Mac clients, an Android client, and DIY devices.
Vector search for humans. Also available on cloud - cloud.marqo.ai
OpenMMLab Pre-training Toolbox and Benchmark
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
Chinese NLP solutions (large language models, data, models, training, inference)
Easily compute clip embeddings and build a clip retrieval system with them
Rapid Android UI development that tames the quirks of native widgets
Vision-Language Models for Vision Tasks: A Survey
🥂 Gracefully face hCaptcha challenge with MoE(ONNX) embedded solution.
Must-have resource for anyone who wants to experiment with and build on the OpenAI Vision API 🔥
Search inside YouTube videos using natural language
Search photos on Unsplash using natural language
Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).
Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)
CLIP + FFT/DWT/RGB = text to image/video
An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval"
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.