niznik-dev
niznik-dev / analyze-experiment
Generates interactive visualizations from experiment evaluations using inspect-viz, supporting data analysis and reporting.
niznik-dev / design-experiment
Assists in planning LLM fine-tuning and evaluation experiments, guiding users through a structured YAML configuration process.
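A structured YAML configuration for such an experiment might look like the sketch below. Every field name here is illustrative only; the skill's actual schema is not documented in this listing.

```yaml
# Hypothetical experiment config -- field names are assumptions,
# not the skill's actual schema.
experiment:
  name: finetune-baseline
finetune:
  base_model: meta-llama/Llama-3.1-8B
  learning_rate: 2e-5
  epochs: 3
evaluation:
  tasks:
    - mmlu
  metric: accuracy
```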
niznik-dev / scaffold-experiment
Automates the setup of machine learning experiment infrastructure, generating the fine-tuning and evaluation configurations.
niznik-dev / run-experiment
Orchestrates machine learning experiments, running fine-tuning and evaluation in sequence.
niznik-dev / summarize-experiment
Summarizes completed experiments by extracting key metrics such as training loss and evaluation accuracy.
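Extracting "the last reported value of each metric" from a training log can be sketched as follows. The JSON-lines layout and the key names (`train_loss`, `accuracy`) are assumptions for illustration, not the skill's actual log format.

```python
import json
from pathlib import Path


def summarize_run(metrics_path):
    """Return the final value of each tracked metric from a JSONL log.

    Assumes each line is a JSON object that may contain "train_loss"
    and/or "accuracy" keys (a hypothetical layout); later lines win.
    """
    last = {}
    for line in Path(metrics_path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        for key in ("train_loss", "accuracy"):
            if key in record:
                last[key] = record[key]
    return last
```

Keeping only the last occurrence of each key means the summary reflects the end-of-training state without loading the whole log into memory at once.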
niznik-dev / analyze-to-pdf
Converts experiment analysis reports from Markdown to PDF using pandoc, producing shareable, well-formatted documents.
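A minimal sketch of how such a conversion could be wired up: build the pandoc argument list, then hand it to `subprocess.run`. The `-o` output flag and `--toc` are standard pandoc options, but whether this skill passes `--toc` (or any extra flags) is an assumption.

```python
from pathlib import Path


def build_pandoc_cmd(md_path, pdf_path=None):
    """Build the pandoc argument list for a Markdown-to-PDF conversion.

    If no output path is given, the PDF lands next to the Markdown file.
    The --toc flag (table of contents) is an illustrative choice, not
    necessarily what the skill uses.
    """
    md = Path(md_path)
    pdf = Path(pdf_path) if pdf_path else md.with_suffix(".pdf")
    return ["pandoc", str(md), "-o", str(pdf), "--toc"]
```

Usage would be something like `subprocess.run(build_pandoc_cmd("report.md"), check=True)`, which requires pandoc (and a LaTeX engine for PDF output) on the PATH.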
niznik-dev / create-inspect-task
Guides users in creating custom evaluation tasks for inspect-ai through an interactive workflow, ensuring best practices and documentation.
niznik-dev / create-meeting-agenda
Creates weekly software meeting agendas by copying and updating the previous agenda in a wiki repository.