Learn why running large language models locally matters, and how LLMs actually work. We start with an overview of the technology and show how you can use the open-source tool Ollama to download and run a model on your laptop. From there, we customize the model and build AI applications you can use right away with Node.js and Python. You'll get a solid foundation in AI skills that you can put to use as soon as you finish this session.
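As a rough sketch of the workflow covered in the session (the model name and the Modelfile contents here are illustrative examples, and Ollama must already be installed):

```shell
# Download a model and chat with it locally
ollama pull llama3
ollama run llama3 "Why does running models locally matter?"

# Customize the model with a Modelfile (example contents):
#   FROM llama3
#   SYSTEM "You are a concise assistant."
#   PARAMETER temperature 0.7
# Then build and run the customized model:
ollama create my-assistant -f Modelfile
ollama run my-assistant
```

Once the model is running, Ollama also exposes a local HTTP API (by default on port 11434) that Node.js and Python applications can call.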
Resources:
Docker AI/ML collection – http://www.docker.com/blog/tag/artificial-intelligence-machine-learning/
LLM Everywhere: Docker for Local and Hugging Face Hosting – https://www.docker.com/blog/llm-docker-for-local-and-hugging-face-hosting/
New GenAI Stack: Streamlined AI/ML Integration Made Easy – https://www.docker.com/blog/introducing-a-new-genai-stack/
Get started with Docker – https://www.docker.com/get-started/
What are containers? – https://www.docker.com/resources/what-container/
Try Docker Desktop – https://www.docker.com/products/docker-desktop/
Docker 101 Tutorial – https://www.docker.com/101-tutorial/
Join the conversation!
LinkedIn → https://dockr.ly/LinkedIn
Twitter → https://dockr.ly/Twitter
Facebook → https://dockr.ly/Facebook
Instagram → https://dockr.ly/Instagram
ABOUT DOCKER: Docker provides a suite of development tools, services, trusted content, and automations, used individually or together, to accelerate the delivery of secure applications.
#docker #ai #machinelearning