Description
This session will explore several powerful open-source CLI coding assistants in a hands-on 90-minute workshop. Learn or share your experience on how to integrate AI-assisted development into your research workflow while maintaining full control through open-source tools and custom LLM endpoints. Experiment with different agents, compare their strengths, and start using them immediately in your projects.
Space is limited to 8 participants, first come, first served.
The landscape of AI-assisted coding has evolved dramatically, with a rapidly growing number of tools promising to be powerful programming partners.
Open-source alternatives combined with institution-provided models offer transparency, flexibility, and cost-effectiveness. This workshop introduces several CLI coding assistants, ranging from emerging to production-ready, designed for coders who value flexibility and prefer terminal-based workflows.
Rather than passive lectures, this session emphasizes practical experimentation. You should bring your own code to explore (or at least some ideas for what to tackle from scratch).
By workshop's end, participants will understand how to integrate these open-source agents into their daily research workflows, customize them for specific languages and frameworks, and maintain full transparency over model interactions through open-source infrastructure.
Prerequisites:
- Your own laptop with a working eduroam Wi-Fi connection, as we will rely heavily on the network.
- Operating system: Linux, macOS, or Windows with WSL2 is recommended; support on native Windows depends on the specific assistant.
- You should be comfortable working in a terminal/command line. This workshop uses terminal-based tools exclusively to stay independent of specific IDEs.
- Optional (recommended): container technology such as Docker or Podman preinstalled, if you would like to isolate the assistants and their installation from your working environment for added safety.
- Optional: Bring your own code.
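As one possible approach to the optional container isolation mentioned above, the sketch below starts a disposable container with your project mounted inside. The image name, endpoint URL, and environment variables are illustrative assumptions, not specific tools or services used in the workshop; many assistants do read OpenAI-compatible endpoint settings from environment variables like these.

```shell
# Throwaway container with the current project mounted; --rm deletes the
# container on exit, so the assistant's installation never touches the host.
# Replace the base URL and token with your institution's actual values.
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  -e OPENAI_BASE_URL="https://llm.example.edu/v1" \
  -e OPENAI_API_KEY="$INSTITUTION_TOKEN" \
  node:22-slim bash

# Inside the container, install the assistant of your choice, e.g.:
#   npm install -g <assistant-package>
```

Because only the mounted directory is shared with the host, anything the assistant installs or modifies outside `/workspace` is discarded when the container exits.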