Playing with local gemma2
I tinkered a bit with Google's new gemma2 model on my 32 GB RAM M1 Pro. So far it seems quite useful, although I have dabbled with it for only a day or two. Here's a summary of some of the things I tested.

Benchmarking

Using the script from earlier iterations:

```shell
for MODEL in gemma2:27b-instruct-q5_K_M gemma2:27b \
             gemma2:9b-instruct-fp16 gemma2:9b-instruct-q8_0 gemma2 \
             llama3:8b-instruct-q8_0 llama3
do
  echo ${MODEL}:
  ollama run $MODEL --verbose 'Why is sky blue?' 2>&1 \
    | grep -E '^(load duration|eval rate)'
  echo
done
```

with the following models: ...
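If you only care about the tokens-per-second number rather than the full stats block, the grep can be swapped for a small awk filter. A minimal sketch, assuming the stats line looks like `eval rate:            23.47 tokens/s` (the format recent ollama builds print with `--verbose`; the sample text below is a stand-in for real ollama output):

```shell
# Stand-in for what `ollama run $MODEL --verbose ... 2>&1` emits;
# replace with the real pipeline on your machine.
sample='load duration:        1.2ms
eval rate:            23.47 tokens/s'

# Pull out just the numeric tokens/s figure from the "eval rate" line.
printf '%s\n' "$sample" | awk '/^eval rate/ { print $3 }'
```

This makes it easy to collect one bare number per model for a quick comparison table.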