Kimi K2, a 1-trillion-parameter model, post-trained with a LoRA on hundreds of thousands of tweets to defeat AI detectors. Comes in two variants: jokegen2-1t-r1 (reasoning) and jokegen2-1t-sft (direct).
Built with rubric-based RL: the model learns to generate tweets that pass as human-written.
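The actual rubric is not published here, so as a minimal sketch, a rubric-based reward might combine several pass/fail checks into a scalar score that the RL loop maximizes. Every check below (length limit, "AI tell" phrases, em-dash presence) is an illustrative assumption, not the rubric jokegen2 was trained on:

```python
# Hypothetical rubric-based reward for RL post-training.
# Each check is a boolean; the reward is the fraction of checks passed.

AI_TELLS = ["as an ai", "i cannot", "delve", "tapestry"]  # assumed tell phrases

def rubric_reward(tweet: str) -> float:
    """Score a candidate tweet in [0, 1] against a simple rubric."""
    checks = [
        len(tweet) <= 280,                               # fits the tweet length limit
        not any(t in tweet.lower() for t in AI_TELLS),   # avoids common AI tells
        "\u2014" not in tweet,                           # no em-dashes
        tweet == tweet.strip(),                          # no stray surrounding whitespace
    ]
    return sum(checks) / len(checks)
```

A real setup would likely use a learned judge (e.g. an AI-detector classifier) as one rubric item rather than only hand-written heuristics.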
