Hacky Prompt Engineering
Using Python-Formatted Output to Constrain LLM Responses