Tutorials & Tech

AI Content Creation & Tech Tutorials

Fun technology tutorials for a cyberpunk future. We share what we learn about AI content creation, open-source tools, and creative coding — because knowledge wants to be free.

Tutorial

AI Content Creation: Cinematic Cat Videos Without a Camera

Every video on our Instagram is made entirely with AI. No camera, no studio, no film crew. Just text prompts, generative video models, and a Python script that syncs everything to music. Here's how we do it.

AI generated content visualization

The Tools We Use

🎬

Kling AI 2.6

Our primary video generation model. Kling excels at cinematic camera movements, consistent character appearance across clips, and photorealistic output. We use it for hero shots and any scene that needs to look indistinguishable from real footage.

Video Gen Cinematic 5-10s clips
🎦

Runway Gen-4.5

Our go-to for stylized and artistic content. Runway handles abstract visuals, transitions, and creative effects better than any other model. We use it when we want a video to feel more like digital art than photography.

Video Gen Stylized Creative FX
🤖

Google Veo 3.1

Google's latest video model brings exceptional prompt adherence and smooth motion. We use Veo for scenes requiring precise actions (a cat turning its head, walking toward camera) where following the prompt exactly matters most.

Video Gen Precise Smooth Motion

The Workflow

Step 1: Script & Storyboard

Every video starts with a concept. We write a shot list describing each scene: the subject, camera angle, lighting, mood, and movement. This becomes the prompt sheet. A 60-second video typically needs 8-12 individual clips, each 4-8 seconds long.

Shot 01: "Close-up of a white Persian cat, neon green eyes, looking directly at camera, shallow depth of field, cinematic lighting, 4K"

Shot 02: "Slow dolly forward through a cyberpunk alley, rain, neon signs reflected in puddles, a white cat sitting on a crate, moody atmosphere"
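A shot list like this can live as structured data, so one file drives both generation and editing. A minimal sketch — the field names are illustrative, not our actual schema:

```python
# Shot list as structured data. Field names are illustrative,
# not the exact schema of our internal prompt sheet.
shots = [
    {
        "id": "shot_01",
        "model": "kling",  # photorealistic hero shot
        "duration_s": 6,
        "prompt": ("Close-up of a white Persian cat, neon green eyes, "
                   "looking directly at camera, shallow depth of field, "
                   "cinematic lighting, 4K"),
    },
    {
        "id": "shot_02",
        "model": "veo",  # precise camera movement
        "duration_s": 8,
        "prompt": ("Slow dolly forward through a cyberpunk alley, rain, "
                   "neon signs reflected in puddles, a white cat sitting "
                   "on a crate, moody atmosphere"),
    },
]

# The sheet doubles as the edit plan: planned runtime is just
# the sum of the clip durations.
total_runtime = sum(s["duration_s"] for s in shots)
print(f"{len(shots)} shots, {total_runtime}s planned runtime")
```

Keeping the sheet as data means the editing script can read it later and know which clip belongs to which beat.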
🎬

Step 2: Generate Clips

We feed each prompt into the appropriate AI model. Kling for photorealistic hero shots, Runway for stylized sequences, Veo for precise movements. Each prompt usually generates 3-5 variations. We cherry-pick the best take for each shot, looking for visual consistency, motion quality, and adherence to the storyboard.

Pro tip: AI video models are non-deterministic, so the same prompt produces a different result every run. That's why we always generate several takes per shot instead of settling for the first. Think of it like film production: you wouldn't use the first take of every scene.

🎵

Step 3: Beat-Synced Editing with Python

This is where it gets technical. We wrote a Python script that analyzes a music track, detects beats using librosa, and automatically cuts between clips on beat drops. The script uses moviepy for video editing and handles crossfades, speed ramps, and color grading. The result: cinematic, music-synced videos that feel professionally edited.

# Simplified beat-sync workflow
import librosa
import moviepy.editor as mp

# Detect beats in the audio track
y, sr = librosa.load("track.mp3")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# clips: list of mp.VideoFileClip objects loaded from the generated shots.
# Cut one clip per beat interval, cycling through the source clips.
timeline = []
for i in range(len(beat_times) - 1):
    duration = beat_times[i + 1] - beat_times[i]
    clip = clips[i % len(clips)].subclip(0, duration)
    timeline.append(clip)

final = mp.concatenate_videoclips(timeline).set_audio(mp.AudioFileClip("track.mp3"))
🚀

Step 4: Polish & Publish

Final touches include color grading for visual consistency across clips, adding text overlays, adjusting audio levels, and exporting at the right resolution for each platform. Instagram Reels, TikTok, and YouTube Shorts all have different optimal specs. We batch-export from the Python pipeline to hit all platforms in one run.
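The batch-export step can be sketched as a spec table plus a loop over moviepy renders. The resolutions below are the commonly published vertical-video recommendations, not values from our pipeline config, and the function name is illustrative:

```python
# Per-platform export targets. Sizes are the commonly published
# 9:16 vertical-video recommendations, not our pipeline config.
PLATFORMS = {
    "instagram_reels": {"size": (1080, 1920), "fps": 30, "max_s": 90},
    "tiktok":          {"size": (1080, 1920), "fps": 30, "max_s": 600},
    "youtube_shorts":  {"size": (1080, 1920), "fps": 30, "max_s": 180},
}

def export_all(final_video, out_dir="exports"):
    """Render one master edit once per platform spec.

    final_video: a moviepy clip, e.g. the output of the beat-sync script.
    """
    for name, spec in PLATFORMS.items():
        clip = final_video.resize(newsize=spec["size"]).set_fps(spec["fps"])
        clip.write_videofile(f"{out_dir}/{name}.mp4")
```

One master edit in, three platform-ready files out — which is what makes the single-run batch export possible.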

Why Python over Premiere/DaVinci?

Repeatability. Once the script is written, creating a new video takes minutes instead of hours. Change the music track, swap in new clips, and the script handles all the timing automatically. It's the cyberpunk way — automate everything.

"We're not filmmakers. We're not video editors. We're cat breeders who learned Python and figured out how to make AI do the heavy lifting. If we can do it, you can too."

— Persian Punks
Tutorial

Animeify Images with Python

Turn any photo into anime-style art using PyTorch and AnimeGAN — a lightweight generative adversarial network that transforms real images into anime versions. No LLM required, no API calls, no subscription fees. Just open-source machine learning running on your own hardware.

Anime style digital art

How It Works

AnimeGAN is a generative adversarial network trained specifically on anime art styles. It learns the visual patterns of anime (bold outlines, flat colors, stylized shading) and applies them to real photographs. The model runs entirely locally — your images never leave your machine.

PyTorch GAN Open Source

What You Need

Python 3.8+, PyTorch, and a GPU with at least 4GB VRAM (CPU works too, just slower). The AnimeGAN model weights are freely available and only around 8MB. No cloud API, no tokens, no ongoing costs. Install the dependencies, download the weights, and you're running in under five minutes.

Python 3.8+ PyTorch GPU Optional

Try It Right Now

We built a browser-based demo so you can test AnimeGAN without installing anything. Upload a photo, pick an anime style, and see the result in seconds. It runs client-side in your browser — your images stay private.

Launch Animeify Demo
# Quick start — animeify an image
import torch
from PIL import Image
from torchvision import transforms
from model import Generator  # from the AnimeGAN repo

# Load the pretrained AnimeGAN model
model = Generator()
model.load_state_dict(torch.load("animeGAN_weights.pth", map_location="cpu"))
model.eval()

# AnimeGAN expects input normalized to [-1, 1]
img = Image.open("my_cat.jpg").convert("RGB")
tensor = transforms.ToTensor()(img).unsqueeze(0) * 2 - 1

# Run inference without tracking gradients
with torch.no_grad():
    result = model(tensor)

# Denormalize back to [0, 1] and save the anime version
output = transforms.ToPILImage()(result.squeeze(0).clamp(-1, 1) * 0.5 + 0.5)
output.save("my_cat_anime.jpg")
Open Source

comma.ai / openpilot

openpilot is an open-source driver assistance system developed by comma.ai. It provides adaptive cruise control and lane centering for over 300 supported car models — essentially giving your existing car semi-autonomous driving capabilities for a fraction of the cost of factory ADAS upgrades.

Car dashboard with driver assistance technology

What It Does

openpilot provides two core features: adaptive cruise control (maintains speed and following distance) and lane centering (keeps your car centered in the lane). On supported cars, it handles highway driving with minimal driver input. It uses a camera-first approach, similar to Tesla's vision system, but fully open source.

The Hardware

comma.ai sells the comma 3X, a dedicated device that mounts behind your rearview mirror and connects to your car through a model-specific wiring harness. It runs openpilot, records driving data, and processes everything on-device. The entire system is designed to be installed in under 30 minutes with no permanent modifications to your car.

Why We Care

Open-source self-driving software is the most cyberpunk technology on the planet right now. A community of hackers and engineers building driver assistance that rivals billion-dollar OEM systems — and giving it away for free. We use openpilot daily on our drives across Texas, and it is genuinely impressive.

"The comma 3X is the best driving companion we own, and openpilot gets better with every update. It's open source, it's community-driven, and it turns a regular car into something that feels like the future."

— Persian Punks
Open Source Tool

CS2 Inventory ROI Tracker

A Python toolkit that scrapes your Counter-Strike 2 Steam inventory, pulls live market prices, fetches lifetime price history, and generates an interactive dashboard so you can see exactly what your skins are worth — and how that's changed over time. Open source, no subscriptions, runs locally.

Gaming setup with monitor displaying statistics

Inventory Scraping

Fetches your full CS2 inventory directly from Steam and pulls current Steam Market prices for every item. Data is cached locally as JSON so repeat runs are fast and don't hammer the API.

Python Steam API JSON Cache
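The fetch-and-cache pattern can be sketched against Steam's public inventory endpoint (app id 730 is CS2, context 2 holds the tradable items). This is illustrative, not the toolkit's actual code:

```python
import json
import pathlib
import urllib.request

def fetch_inventory(steam_id: str,
                    cache: pathlib.Path = pathlib.Path("cache/inventory.json")) -> dict:
    """Fetch a public CS2 inventory, caching the raw JSON locally.

    Repeat runs read the cache instead of hitting Steam again.
    """
    if cache.exists():
        return json.loads(cache.read_text())
    # 730 = CS2's app id, 2 = the tradable-item context
    url = (f"https://steamcommunity.com/inventory/{steam_id}/730/2"
           "?l=english&count=1000")
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.loads(resp.read())
    cache.parent.mkdir(parents=True, exist_ok=True)
    cache.write_text(json.dumps(data))
    return data
```

Delete the cache file when you want a fresh pull; everything else reads the local copy.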

Price History Charts

Retrieves lifetime price history from the Steam Market for each item and renders interactive line charts. See how a skin's value has trended over months — useful for spotting buy and sell windows.

Price History Charts Time Range

Interactive Dashboard

Generates a self-contained HTML dashboard with tabbed views, sortable columns, and filtering. No backend required — open the file in any browser. Also exports Markdown reports for quick reference.

Open Dashboard
# Clone and run cs-roi
git clone https://github.com/persian-punks/cs-roi
cd cs-roi
pip install -r requirements.txt

# Add your Steam ID to .env
echo "STEAM_ID=your_steam_id_here" > .env

# Scrape inventory + generate dashboard
python steam_inventory_scraper.py
python steam_dashboard.py

# Open reports/steam_dashboard.html in your browser

"CS2 skins are genuinely volatile assets. We built this because we wanted a clear picture of what our inventory was actually worth — and whether holding or selling made sense. Now we know."

— Persian Punks
Deep Dive

On-Chain Analysis: Bitcoin vs Ethereum vs Cardano

On-chain data is the blockchain's heartbeat — raw, unfiltered signal from the network itself. Unlike price charts, on-chain metrics reveal what's actually happening: who's moving coins, how much supply is dormant, whether the market is overheated, how deeply ETH is embedded in DeFi. We built a live dashboard so you can see it all in one place.

Cryptocurrency blockchain visualization

Why On-Chain?

Price is what someone is willing to pay right now. On-chain data is what's actually happening. MVRV tells you if the market is overheated. Active address trends signal network growth or decline. Long-term holder supply shows conviction. It's the difference between the headline and the story.

MVRV Active Addresses NVT

BTC: Digital Gold Layer

Bitcoin's on-chain story is security and scarcity. Hash rate at all-time highs means the network has never been more secure. Long-term holder supply above 74% means most BTC hasn't moved in over a year — classic accumulation behavior. MVRV below 2 historically signals undervaluation.

Hash Rate HODL Waves MVRV

ETH: Programmable Money

Post-merge Ethereum burns ETH with every transaction via EIP-1559, making high-activity periods deflationary. ~28% of supply is now staked and earning yield. The DeFi TVL locked on-chain ($45B+) represents real economic activity that simply doesn't exist in Bitcoin's design space.

Burn Rate Staking DeFi TVL
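The deflationary mechanic follows directly from EIP-1559's fee rule: the base-fee portion of every transaction is destroyed. A toy calculation with illustrative numbers, not live chain data:

```python
# EIP-1559: the base fee of every transaction is burned.
# Numbers below are illustrative, not live chain data.
GWEI = 10**9

def eth_burned(base_fee_gwei: float, gas_used: int) -> float:
    """ETH destroyed by a block: base fee (in gwei) times gas used."""
    return base_fee_gwei * gas_used / GWEI

# A full 30M-gas block at a 20 gwei base fee burns 0.6 ETH.
burn = eth_burned(base_fee_gwei=20, gas_used=30_000_000)
print(f"{burn} ETH burned")
```

When burn per block exceeds new issuance to validators, net supply shrinks, which is what "high-activity periods are deflationary" means in practice.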

What is MVRV?

Market Value to Realized Value. Realized value prices each coin at what it was worth when it last moved on-chain, summed across the entire supply. MVRV below 1 means most holders are underwater — historically a bottom signal. Above 3.5 is historically overheated. Between 1–2 is the fair-to-undervalued range.
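That definition translates directly into code. A toy sketch with illustrative numbers; the zone thresholds follow the ranges above, though the label for the 2–3.5 band is our own:

```python
# MVRV from the definition above. Numbers are illustrative, not live data.
def mvrv(market_cap: float, realized_cap: float) -> float:
    """Ratio of market value to realized value."""
    return market_cap / realized_cap

def zone(ratio: float) -> str:
    """Map an MVRV ratio to the historical zones described above."""
    if ratio < 1:
        return "bottom signal"
    if ratio <= 2:
        return "fair to undervalued"
    if ratio <= 3.5:
        return "elevated"  # our label for the band between 2 and 3.5
    return "overheated"

ratio = mvrv(market_cap=1.2e12, realized_cap=0.8e12)
print(round(ratio, 2), zone(ratio))  # a 1.5 ratio sits in the fair-to-undervalued band
```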

ADA: Stake Pool Nation

Cardano's on-chain story is participation. With ~65% of supply staked across ~3,000 pools, it has the highest staking ratio of the three and one of the most decentralized validator sets in proof-of-stake. The ecosystem is smaller but the architecture reflects a deliberate philosophy: broad participation over raw throughput.

Staking Decentralization

Data Sources

Live market data via CoinGecko API (price, volume, market cap, 7-day history). On-chain metrics sourced from Glassnode, IntoTheBlock, and Cardanoscan — approximate values updated periodically. Not financial advice.

CoinGecko Glassnode IntoTheBlock
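For the market-data side, CoinGecko's public simple/price endpoint needs no API key. A minimal sketch of pulling spot prices for the three chains (function names are ours):

```python
import json
import urllib.request

API = "https://api.coingecko.com/api/v3/simple/price"

def price_url(ids=("bitcoin", "ethereum", "cardano"), vs="usd") -> str:
    """Build the simple/price query URL for the given CoinGecko ids."""
    return f"{API}?ids={','.join(ids)}&vs_currencies={vs}"

def spot_prices(ids=("bitcoin", "ethereum", "cardano")) -> dict:
    """Fetch current USD spot prices, e.g. {'bitcoin': {'usd': ...}, ...}."""
    with urllib.request.urlopen(price_url(ids), timeout=30) as resp:
        return json.loads(resp.read())
```

CoinGecko's free tier rate-limits aggressively, so the dashboard caches responses rather than calling this on every refresh.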

"We don't trade. We're cat breeders. But we do hold crypto, we accept it for merch now, and we built this because we wanted to understand what we're actually holding. On-chain analysis is how you look past the noise."

— Persian Punks

Coming Soon

More tutorials and deep-dives in the pipeline. Follow us for updates.

💻

Local LLMs with Ollama

Run large language models locally on your own hardware. No cloud, no API keys, no data leaving your machine. We'll cover setup, model selection, and practical use cases.

Coming Soon
🎨

AI Image Generation Deep Dive

A comprehensive guide to creating consistent characters and scenes with Flux, Midjourney, and Stable Diffusion. Prompt engineering, LoRA training, and workflow automation.

Coming Soon
🔐

Privacy-First Tech Stack

The tools and services we use to stay private online. From ProtonMail to VPNs, encrypted messaging, and privacy-respecting alternatives to big tech services.

Coming Soon

Follow @persian.punks for More Tech Content

We share tutorials, AI experiments, behind-the-scenes content creation, and of course — Persian cats. The cyberpunk cat community is growing.

Follow on Instagram Get in Touch