Show HN: Pip install inference, open source computer vision deployment https://ift.tt/0XgWM9J

Deploying vision models is time-consuming and tedious. Setting up dependencies. Fixing conflicts. Configuring TensorRT (TRT) acceleration. Flashing (and re-flashing) NVIDIA Jetsons. A streamlined, developer-friendly solution for inference is needed.

We, the Roboflow team, have been hard at work open sourcing Inference, a vision deployment solution designed with developers in mind. It offers an HTTP-based interface, so you can run models on your own hardware without writing architecture-specific inference code. Here's a demo showing how to go from a model to GPU inference on a video of a football game in ~10 minutes: https://www.youtube.com/watch?v=at-yuwIMiN4

Inference powers millions of daily API calls for global sports broadcasts, one of the world's largest railways, a leading electric car manufacturer, and multiple other Fortune 500 companies, along with countless hackers' hobby and research projects.

Inference runs in Docker and supports CPU (ARM and x86), NVIDIA GPU, and TRT. It manages the dependencies and the environment; all you need to do is make HTTP requests to the server. YOLOv5, YOLOv8, YOLACT, CLIP, SAM, and other popular vision models are supported (some models need to be hosted on Roboflow first, see the docs; we're working on bring-your-own model weights!).

Try it out and tell us what you think! https://ift.tt/vW4uzB0
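
To give a flavor of the HTTP workflow described above, here is a minimal sketch in Python. The Docker image name (roboflow/roboflow-inference-server-cpu) and port 9001 reflect Roboflow's documented defaults as best I recall; the endpoint shape, model ID, and API key below are assumptions for illustration, so check the Inference docs for the current API.

    # Start the server first (image name and port are assumed defaults;
    # verify against the Inference docs):
    #   docker run -p 9001:9001 roboflow/roboflow-inference-server-cpu

    import base64
    import requests

    # Hypothetical values for illustration: a Roboflow-hosted model ID
    # ("project/version") and your Roboflow API key.
    MODEL_ID = "my-project/1"
    API_KEY = "YOUR_ROBOFLOW_API_KEY"

    # Read a local image and base64-encode it for the request body.
    with open("frame.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    # The route and payload shape here are an assumption based on the
    # HTTP interface the post describes, not a confirmed API contract.
    resp = requests.post(
        f"http://localhost:9001/{MODEL_ID}",
        params={"api_key": API_KEY},
        data=image_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    print(resp.json())  # predictions, e.g. boxes, classes, confidences

Because the heavy lifting happens inside the container, the same request would work whether the server runs on a laptop CPU, a GPU box, or a Jetson; only the Docker image changes.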
