Scoring AI Influence in Jekyll Posts with Local LLMs

There’s a moment that kind of sneaks up on you when you’ve been writing for a while, especially if you’ve started using AI tools regularly. You stop asking whether AI was used at all, and instead you start wondering how much it actually shaped what you’re reading. That shift is subtle, but once you notice it, you can’t really unsee it.

That’s exactly what led me down this path. I wasn’t interested in trying to “detect AI” in some absolute sense. That approach feels outdated pretty quickly, especially now that most content is some mix of human thinking and AI-assisted drafting. Instead, I wanted something more practical and honest — a way to measure the influence, not the origin.

What I ended up building was a scoring system that lives directly inside my Jekyll workflow. The idea is simple: analyze each post, look for patterns that tend to show up in AI-assisted writing, and assign a score that gives readers a sense of how much AI may have influenced the final result. It’s not about calling something out — it’s about adding transparency in a way that actually feels useful.

The Idea: Make AI Influence Visible

Once I stopped thinking about detection, the problem became a lot clearer. I didn’t need a binary answer telling me whether something was written by AI or not. That’s not how content works anymore. What I needed was a way to represent how it feels to read — specifically, how much it resembles patterns that tend to come from AI-assisted writing.

That led me to a much more practical approach: give every post a score that reflects how “AI-like” it feels. Not where it came from, but how it presents itself to the reader. That distinction matters more than anything else.

To get there, I focused on patterns instead of origins. Readers aren’t sitting there trying to reverse-engineer your writing process — they’re reacting to structure, tone, repetition, and flow. So that’s exactly what I decided to measure.

From a workflow standpoint, the system ended up being surprisingly straightforward. Each post gets analyzed, assigned a score, and that score gets written directly into the front matter. From there, Jekyll can render it however I want — a badge, a bar, a breakdown, whatever makes sense for the site.

The whole thing stays completely static. No plugins, no runtime processing, no extra moving parts. Just data generated ahead of time, baked into the post, and ready to use. That simplicity is what makes it work.

The Scoring Methodology

The scoring system is built on signals, not guesses.

Instead of a single black-box number, the model evaluates specific traits:

  • List density: how often structured lists appear
  • Repetition: repeated phrases or sentence patterns
  • Tone uniformity: overly consistent voice throughout
  • Structure regularity: predictable formatting and flow
  • Instructional style: content that reads like step-by-step output

Each of these contributes to a final score and supporting metadata.

The output includes:

  • ai_style_score: overall AI-likeness from 0 to 100
  • confidence: how confident the model is in that score
  • signals: breakdown of contributing factors
  • summary: a short explanation of why the score was assigned

This approach isn’t about being perfect. It’s about being consistent and explainable.
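To make that concrete, here’s a toy illustration of how level-based signals could combine into a single number. The mapping and weights below are hypothetical, chosen only to show the shape of the calculation, not the exact values the final script uses:

```python
# Hypothetical sketch: combine low/medium/high signal levels into a 0-100 score.
LEVELS = {"low": 0.2, "medium": 0.5, "high": 0.9}

# Example weights; the real system tunes these per signal.
WEIGHTS = {
    "list_density": 0.35,
    "repetition": 0.25,
    "tone_uniformity": 0.2,
    "structure_regularity": 0.2,
}

def score_from_signals(signals):
    """Weighted sum of signal levels, scaled to 0-100."""
    total = sum(LEVELS[signals[k]] * w for k, w in WEIGHTS.items())
    return round(total * 100)

# A post with heavy lists but little repetition lands mid-range.
print(score_from_signals({
    "list_density": "high",
    "repetition": "low",
    "tone_uniformity": "medium",
    "structure_regularity": "medium",
}))
```

The point isn’t the specific numbers; it’s that every signal contributes a bounded, explainable slice of the final score.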

How This Maps to Jekyll Front Matter

Once the scoring is done, everything gets written directly into the post’s front matter.

That decision was intentional.

It keeps everything:

  • Static
  • Portable
  • Easy to render in Liquid
  • Compatible with GitHub Pages

Conceptually, the front matter looks like this:

ai_analysis:
  ai_style_score: 72
  confidence: 0.88
  signals:
    list_density: medium
    repetition: low
    tone_uniformity: high
    structure_regularity: high
    instructional_style: medium
  summary: "The content shows consistent tone and structured formatting with moderate instructional patterns, suggesting partial AI assistance."

Each field maps directly to the scoring model, which means your templates can visualize or surface the data however you want.
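For example, a template could surface the score as a simple badge. This is a sketch, not the site’s actual markup; the class name is a placeholder:

```liquid
{% if page.ai_analysis %}
  <div class="ai-badge">
    AI style score: {{ page.ai_analysis.ai_style_score }}/100
    ({{ page.ai_analysis.confidence | times: 100 | round }}% confidence)
  </div>
  <p>{{ page.ai_analysis.summary }}</p>
{% endif %}
```

Because the data lives in front matter, this works on GitHub Pages with no plugins.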

Step 1: Building analyze_post.py

The first version of this system was focused on a single file.

analyze_post.py is where all the core logic lives.

The purpose of the script is straightforward:

Take one Markdown file, analyze it, and return structured scoring data.

What the Script Actually Does

  • Reads the Markdown file from disk
  • Strips out existing front matter
  • Truncates content to avoid context overflow
  • Sends the content to a local LLM API
  • Forces a structured JSON response
  • Augments that response with deterministic signals

The API interaction itself is simple:

import requests
import json

def analyze_with_llm(content):
    # Literal braces are doubled so str.format leaves them intact;
    # otherwise the JSON schema braces would break the format call.
    prompt = """
You are analyzing a blog post for AI-style writing patterns.

Evaluate the following content and return ONLY valid JSON with this schema:
{{
  "ai_style_score": int (0-100),
  "confidence": float (0-1),
  "signals": {{
    "list_density": "low|medium|high",
    "repetition": "low|medium|high",
    "tone_uniformity": "low|medium|high",
    "structure_regularity": "low|medium|high",
    "instructional_style": "low|medium|high"
  }},
  "summary": "short explanation"
}}

Scoring guidance:
- High repetition, rigid structure, and uniform tone increase AI score
- Natural variation, uneven structure, and unique phrasing decrease AI score

Content:
\"\"\"{}\"\"\"
""".format(content[:8000])  # truncate to avoid context overflow

    response = requests.post(
        "http://127.0.0.1:11434/api/generate",
        json={
            "model": "llama3.1:latest",
            "prompt": prompt,
            "stream": False
        },
        timeout=60
    )

    result = response.json()

    # Ensure strict JSON parsing
    try:
        return json.loads(result.get("response", "{}"))
    except json.JSONDecodeError:
        return {
            "ai_style_score": 0,
            "confidence": 0.0,
            "signals": {},
            "summary": "Failed to parse model response"
        }

The important part isn’t the request — it’s the prompt.

Prompt Design and Signal Accuracy

This is where most of the work went.

The model is explicitly instructed to evaluate:

  • Repetition patterns
  • Structural predictability
  • Tone consistency
  • Instructional density
  • Formatting behavior

And it’s forced to respond in a strict schema.

That constraint is what makes the system reliable. Without it, the output becomes inconsistent and unusable for automation.
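One way to enforce that constraint in code is to validate the parsed response against the schema before trusting it. This is a minimal sketch (the helper name is mine, not part of the script; the keys follow the schema above):

```python
# Minimal validation sketch for the response schema described above.
EXPECTED_SIGNALS = {
    "list_density", "repetition", "tone_uniformity",
    "structure_regularity", "instructional_style",
}
ALLOWED_LEVELS = {"low", "medium", "high"}

def is_valid_analysis(data):
    """Return True only if the parsed JSON matches the expected schema."""
    if not isinstance(data, dict):
        return False
    if not isinstance(data.get("ai_style_score"), int):
        return False
    if not (0 <= data["ai_style_score"] <= 100):
        return False
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not (0 <= conf <= 1):
        return False
    signals = data.get("signals")
    if not isinstance(signals, dict) or set(signals) != EXPECTED_SIGNALS:
        return False
    if any(v not in ALLOWED_LEVELS for v in signals.values()):
        return False
    return isinstance(data.get("summary"), str)
```

Anything that fails this check gets discarded rather than written into front matter, which keeps downstream automation predictable.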

Why I Chose Local LLMs Over Cloud APIs

This ended up being one of the easiest decisions in the entire process. Running analysis on a few posts is no big deal, but once you scale that up to an entire blog, things change quickly. Costs start to matter, performance starts to matter, and the friction of relying on an external API becomes very real.

Cloud APIs bring a lot of overhead with them — you’re paying per token, dealing with scaling considerations, and introducing latency into something that ideally should feel instant while you’re iterating. That might be fine for occasional use, but it doesn’t hold up well when you’re running batch analysis or constantly tweaking prompts.

Switching to local LLMs removed all of that. There’s no per-request cost, no dependency on external services, and no waiting around for responses. More importantly, it gave me the freedom to iterate quickly. I could adjust prompts, rerun analysis, and refine the scoring model without thinking about usage limits or billing.

The surprising part was how close the results were. Once I dialed in the prompt, the outputs from a local model were more than good enough for this use case. At that point, it wasn’t really a trade-off anymore — it was just the more practical solution for the workload I was building.

	import requests
	import frontmatter
	import json
	import shutil
	import os
	import re
	import datetime
	
	OLLAMA_URL = "http://127.0.0.1:11434/api/generate"
	MODEL = "llama3.1:latest"
	
	
	def get_file_path():
		path = input("Enter full path to markdown file: ").strip()
	
		if not os.path.isfile(path):
			print("❌ File not found.")
			exit(1)
	
		if not path.endswith(".md"):
			print("❌ File must be a .md file.")
			exit(1)
	
		return path
	
	
	# -----------------------------
	# Deterministic Signal Detection
	# -----------------------------
	
	def detect_emoji_usage(text):
		emoji_pattern = re.compile(
			"[\U0001F300-\U0001FAFF]+", flags=re.UNICODE
		)
		return min(1.0, len(emoji_pattern.findall(text)) / 10.0)
	
	
	def detect_list_density(text):
		lines = text.split("\n")
		list_lines = sum(
			1 for l in lines if l.strip().startswith(("-", "*", "1.", "2.", "3."))
		)
		return min(1.0, list_lines / len(lines)) if lines else 0.0
	
	
	def detect_instructional_density(text):
		keywords = [
			"step", "steps", "first", "next", "then", "finally",
			"follow", "you can", "make sure", "ensure", "click",
			"open", "go to", "select"
		]
	
		lower = text.lower()
		count = sum(lower.count(k) for k in keywords)
	
		return min(1.0, count / 60.0)
	
	
	# -----------------------------
	# Temporal Intelligence
	# -----------------------------
	
	def extract_date_from_filename(path):
		filename = os.path.basename(path)
		match = re.match(r"(\d{4})-(\d{2})-(\d{2})", filename)
	
		if match:
			try:
				y, m, d = map(int, match.groups())
				return datetime.date(y, m, d)
			except ValueError:
				print(f"⚠️ Invalid date in filename: {filename}")
				return None
	
		return None
	
	
	def temporal_factor(date):
		if not date:
			return 1.0
	
		cutoff = datetime.date(2022, 11, 30)
		return 0.2 if date < cutoff else 1.0
	
	
	def classify_era(date):
		if not date:
			return "unknown"
	
		cutoff = datetime.date(2022, 11, 30)
		return "pre-ai" if date < cutoff else "ai-era"
	
	
	# -----------------------------
	# LLM Analysis
	# -----------------------------
	
	def analyze_with_ollama(content):
		content = content[:8000]
	
		prompt = f"""
	Return ONLY JSON.
	
	{{
	  "signals": {{
	    "repetition": number,
	    "tone_uniformity": number,
	    "structure_regularity": number
	  }},
	  "summary": string
	}}
	
	Rules:
	- Conservative scoring
	- Do not assume AI authorship
	- Do not estimate emojis, lists, or instructions
	
	Content:
	---
	{content}
	---
	"""
	
		response = requests.post(
			OLLAMA_URL,
			json={
				"model": MODEL,
				"prompt": prompt,
				"stream": False,
				"options": {"temperature": 0}
			}
		)
	
		if response.status_code != 200:
			print("❌ Ollama request failed")
			print(response.text)
			return None
	
		raw = response.json().get("response", "").strip()
	
		print("\n--- RAW MODEL OUTPUT ---\n")
		print(raw)
		print("\n------------------------\n")
	
		try:
			return json.loads(raw)
		except json.JSONDecodeError:
			# Models sometimes wrap JSON in markdown fences; strip and retry
			cleaned = raw.replace("```json", "").replace("```", "").strip()
			try:
				return json.loads(cleaned)
			except json.JSONDecodeError:
				print("❌ JSON parsing failed.")
				return None
	
	
	# -----------------------------
	# Scoring
	# -----------------------------
	
	def clamp(v):
		return max(0.0, min(1.0, float(v)))
	
	
	def soften(v):
		return max(0.0, min(1.0, round(v * 0.65, 2)))
	
	
	def compute_score(signals, date):
		weights = {
			"list_density": 0.35,
			"instructional_density": 0.25,
			"repetition": 0.15,
			"tone_uniformity": 0.1,
			"structure_regularity": 0.1,
			"emoji_usage": 0.05
		}
	
		base = sum(signals.get(k, 0) * weights[k] for k in weights)
	
		if date and date < datetime.date(2022, 11, 30):
			base *= 0.2
	
		return round(base, 2)
	
	
	def validate_and_merge(result, content, date):
		try:
			signals = result.get("signals", {})
	
			# LLM signals
			for k in signals:
				signals[k] = soften(clamp(signals[k]))
	
			# Deterministic signals
			signals["emoji_usage"] = detect_emoji_usage(content)
			signals["list_density"] = detect_list_density(content)
			signals["instructional_density"] = detect_instructional_density(content)
	
			score = compute_score(signals, date)
	
			return {
				"ai_style_score": score,
				"confidence": "medium",
				"era": classify_era(date),
				"signals": signals,
				"summary": result.get("summary", "")
			}
	
		except Exception as e:
			print("❌ Validation failed:", e)
			return None
	
	
	# -----------------------------
	# File Handling
	# -----------------------------
	
	def backup_file(path):
		backup_path = path + ".bak"
		shutil.copy(path, backup_path)
		print(f"🗂 Backup created: {backup_path}")
	
	
	def update_front_matter(path, analysis):
		post = frontmatter.load(path)
		post["ai_analysis"] = analysis
	
		with open(path, "w") as f:
			f.write(frontmatter.dumps(post))
	
		print("✅ Front matter updated.")
	
	
	# -----------------------------
	# Main
	# -----------------------------
	
	def main():
		path = get_file_path()
	
		backup_file(path)
	
		post = frontmatter.load(path)
	
		if not post.content.strip():
			print("❌ No content found.")
			return
	
		date = extract_date_from_filename(path)
	
		print(f"📅 Detected date: {date}")
		print("🧠 Analyzing content...\n")
	
		result = analyze_with_ollama(post.content)
	
		if not result:
			print("❌ Analysis failed.")
			return
	
		final = validate_and_merge(result, post.content, date)
	
		if not final:
			print("❌ Validation failed.")
			return
	
		print("\n📊 Final Analysis:\n")
		print(json.dumps(final, indent=2))
	
		confirm = input("\nWrite to front matter? (y/n): ").strip().lower()
	
		if confirm == "y":
			update_front_matter(path, final)
		else:
			print("❌ Aborted.")
	
	
	if __name__ == "__main__":
		main()

Using analyze_post.py

At its core, analyze_post.py is designed to be simple to run and easy to integrate into your existing workflow. It takes a single Markdown file, analyzes its content using a local LLM, and returns structured scoring data that can be written directly into your front matter.

To run the script against a post, launch it from the command line; the version shown above prompts you for the path to the Markdown file you want to analyze rather than taking it as an argument.

python3 analyze_post.py

When executed, the script will read the file, strip out any existing front matter, and process only the content body. It automatically handles truncation to stay within model limits, so you don’t need to worry about excessively long posts breaking the analysis.

The script then sends the content to your local LLM endpoint and expects a strictly formatted JSON response. That response is parsed, validated, and enriched with additional deterministic signals before being returned.

A sample run looks like this:
jon@Mac-Studio Desktop % python3 analyze_post.py _posts/2026-04-01-scoring-ai-influence-jekyll-posts-local-llms.md 
Enter full path to markdown file: _posts/2026-04-01-scoring-ai-influence-jekyll-posts-local-llms.md 
🗂 Backup created: 2026-04-01-scoring-ai-influence-jekyll-posts-local-llms.md.bak
📅 Detected date: 2026-04-01
🧠 Analyzing content...


--- RAW MODEL OUTPUT ---

```
{
  "signals": {
    "repetition": "high",
    "tone_uniformity": "medium",
    "structure_regularity": "low"
  },
  "summary": "The content shows consistent tone and structured formatting with moderate instructional patterns, suggesting partial AI assistance."
}
```

------------------------

There are a few key behaviors built into the script that are worth noting:

  • It enforces a strict response schema to keep outputs predictable
  • It gracefully handles malformed or incomplete model responses
  • It augments model output with additional signal detection where needed
  • It is designed to be composable, making it easy to plug into larger workflows

In practice, this means you can use analyze_post.py as a standalone tool for inspecting individual posts, or as a building block for batch processing, CI pipelines, or content validation workflows.

It’s intentionally minimal, but that’s what makes it flexible. Once you understand how to call it and what it returns, you can shape it to fit just about any content analysis use case.

Step 2: Scaling with analyze_batch.py

Once analyze_post.py was working reliably, the next issue became obvious almost immediately: I wasn’t dealing with a single post, I was dealing with an entire site. Running the analysis one file at a time wasn’t practical, so the next step was to scale the workflow. That’s where analyze_batch.py came in. Instead of replacing the original script, it builds on top of it, wrapping the single-post analyzer and applying it across a directory of Markdown files so the entire site can be processed in one pass.

What the Batch Script Adds

  • Iterates over all posts in a directory
  • Handles date-based grouping of content
  • Supports dry-run mode for testing
  • Aggregates structured results
  • Handles edge cases gracefully

The core loop looks like this:

# Batch processing loop (from analyze_batch.py)

import os
import glob

def process_directory(directory, dry_run=True):
    files = glob.glob(os.path.join(directory, "*.md"))

    results = []

    for path in files:
        print(f"\n📄 Processing: {path}")

        try:
            # Call single-file analyzer
            analysis = analyze_post(path)

            if not analysis:
                print("❌ Skipping due to failed analysis.")
                continue

            results.append({
                "path": path,
                "analysis": analysis
            })

            if dry_run:
                print("🧪 Dry run enabled — not writing changes.")
            else:
                write_to_front_matter(path, analysis)

        except Exception as e:
            print(f"⚠️ Error processing {path}: {e}")
            continue

    return results

One of the more useful additions to the batch script was the ability to group posts by time period, which added an entirely new layer of context to the analysis. Instead of looking at scores in isolation, I could compare how content evolved over time — from older posts written before AI tools were widely used, to transitional content, and then to more recent posts where AI assistance is more common. Seeing those shifts side by side made the scoring far more meaningful, because it provided a baseline for understanding what “normal” looked like before and after AI became part of the writing process.

Using analyze_batch.py

Once the single-post workflow is in place, analyze_batch.py is what allows you to scale that process across your entire site. Instead of manually running analysis on individual files, this script walks a directory of Markdown posts and applies the same logic in a consistent, repeatable way.

At a basic level, you run the script from the command line; the current version prompts for the posts directory and a sample size rather than taking arguments.

python3 analyze_batch.py

By default, the script is designed to be safe. It supports a dry-run mode, which means it will perform the full analysis and print results without writing anything back to your files. This is useful when you’re tuning prompts or validating output before committing changes.

python3 analyze_batch.py /path/to/your/_posts --dry-run

When you’re ready to apply changes, you can disable dry-run mode. At that point, the script will begin writing the computed analysis directly into each file’s front matter.

There are a few important behaviors built into the script that make it practical to use at scale. It iterates through all Markdown files in the target directory, gracefully skips files that fail analysis, and continues processing without stopping the entire run. Results are aggregated so you can review them holistically, rather than file by file.

Because it builds on top of the single-post analyzer, you get the same scoring consistency, just applied across your entire content set. That makes it useful not just for one-time analysis, but for ongoing workflows like content audits, historical comparisons, or even CI-based validation.
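The published batch script that follows prompts interactively, but a `--dry-run` flag could be wired in with `argparse`. This is a sketch of one way to do it; the flag and argument names are my assumptions, not part of the script below:

```python
import argparse

def parse_args(argv=None):
    """CLI sketch: positional posts directory plus an opt-in dry-run flag."""
    parser = argparse.ArgumentParser(description="Batch-analyze Jekyll posts")
    parser.add_argument("posts_dir", help="Path to the _posts directory")
    parser.add_argument(
        "--dry-run", action="store_true",
        help="Analyze and print results without writing front matter",
    )
    return parser.parse_args(argv)

args = parse_args(["_posts", "--dry-run"])
print(args.posts_dir, args.dry_run)
```

Here is the full script as it currently stands (it prompts for the directory and sample size instead of taking flags):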

import requests
import frontmatter
import json
import os
import re
import datetime
import random

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"
MODEL = "llama3.1:latest"

POSTS_DIR = input("Enter path to _posts directory: ").strip()
SAMPLE_SIZE = int(input("How many posts to sample? (e.g. 10): ").strip())


# -----------------------------
# Helpers
# -----------------------------

def extract_date_from_filename(path):
    filename = os.path.basename(path)
    match = re.match(r"(\d{4})-(\d{2})-(\d{2})", filename)

    if match:
        try:
            y, m, d = map(int, match.groups())
            return datetime.date(y, m, d)
        except ValueError:
            print(f"⚠️ Invalid date in filename: {filename}")
            return None

    return None


def classify_bucket(date):
    if not date:
        return "unknown"

    if date < datetime.date(2022, 11, 30):
        return "pre-ai"
    elif date < datetime.date(2024, 1, 1):
        return "early-ai"
    else:
        return "recent-ai"


# -----------------------------
# Deterministic Signals
# -----------------------------

def detect_emoji_usage(text):
    emoji_pattern = re.compile(
        "[\U0001F300-\U0001FAFF]+", flags=re.UNICODE
    )
    return min(1.0, len(emoji_pattern.findall(text)) / 10.0)


def detect_list_density(text):
    lines = text.split("\n")
    list_lines = sum(
        1 for l in lines if l.strip().startswith(("-", "*", "1.", "2.", "3."))
    )
    return min(1.0, list_lines / len(lines)) if lines else 0.0


def detect_instructional_density(text):
    keywords = [
        "step", "steps", "first", "next", "then", "finally",
        "follow", "you can", "make sure", "ensure", "click",
        "open", "go to", "select"
    ]

    lower = text.lower()
    count = sum(lower.count(k) for k in keywords)

    return min(1.0, count / 60.0)


# -----------------------------
# LLM Call
# -----------------------------

def analyze(content):
    content = content[:8000]

    prompt = f"""
Return ONLY JSON.

{{
  "signals": {{
    "repetition": number,
    "tone_uniformity": number,
    "structure_regularity": number
  }}
}}

Rules:
- Conservative scoring
- Do not assume AI authorship
- Do not estimate emojis, lists, or instructions

Content:
---
{content}
---
"""

    r = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0}
        }
    )

    if r.status_code != 200:
        return None

    raw = r.json().get("response", "").strip()

    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Strip markdown fences and retry before giving up
        try:
            cleaned = raw.replace("```json", "").replace("```", "").strip()
            return json.loads(cleaned)
        except json.JSONDecodeError:
            return None


# -----------------------------
# Scoring
# -----------------------------

def soften(v):
    return max(0.0, min(1.0, v * 0.65))


def compute_score(signals, date):
    weights = {
        "list_density": 0.35,            # ↑ strong signal
        "instructional_density": 0.25,   # ↑ new strong signal
        "repetition": 0.15,
        "tone_uniformity": 0.1,
        "structure_regularity": 0.1,
        "emoji_usage": 0.05
    }

    base = sum(signals.get(k, 0) * weights[k] for k in weights)

    # Temporal adjustment
    if date and date < datetime.date(2022, 11, 30):
        base *= 0.2

    return round(base, 2)


# -----------------------------
# Sampling
# -----------------------------

def collect_posts():
    files = [
        os.path.join(POSTS_DIR, f)
        for f in os.listdir(POSTS_DIR)
        if f.endswith(".md")
    ]

    buckets = {
        "pre-ai": [],
        "early-ai": [],
        "recent-ai": []
    }

    for f in files:
        d = extract_date_from_filename(f)
        bucket = classify_bucket(d)
        if bucket in buckets:
            buckets[bucket].append(f)

    return buckets


def sample_posts(buckets, total):
    per_bucket = max(1, total // 3)

    sample = []
    for bucket in buckets:
        if buckets[bucket]:
            sample += random.sample(
                buckets[bucket],
                min(per_bucket, len(buckets[bucket]))
            )

    return sample


# -----------------------------
# Main
# -----------------------------

def main():
    buckets = collect_posts()
    sample = sample_posts(buckets, SAMPLE_SIZE)

    print("\n📦 Sample selected:\n")
    for f in sample:
        print(f)

    print("\n🧠 Running analysis...\n")

    results = []

    for path in sample:
        post = frontmatter.load(path)
        content = post.content.strip()

        if not content:
            continue

        date = extract_date_from_filename(path)
        bucket = classify_bucket(date)

        llm = analyze(content)

        if not llm:
            print(f"⚠️ Failed: {path}")
            continue

        signals = llm.get("signals", {})

        # soften LLM signals
        for k in signals:
            signals[k] = soften(signals[k])

        # deterministic signals
        signals["emoji_usage"] = detect_emoji_usage(content)
        signals["list_density"] = detect_list_density(content)
        signals["instructional_density"] = detect_instructional_density(content)

        score = compute_score(signals, date)

        result = {
            "file": os.path.basename(path),
            "date": str(date),
            "bucket": bucket,
            "score": score,
            "signals": signals
        }

        results.append(result)

        print(f"\n--- {result['file']} ---")
        print(json.dumps(result, indent=2))

    # -----------------------------
    # Summary
    # -----------------------------
    print("\n📊 SUMMARY\n")

    for bucket in ["pre-ai", "early-ai", "recent-ai"]:
        bucket_scores = [r["score"] for r in results if r["bucket"] == bucket]

        if bucket_scores:
            avg = round(sum(bucket_scores) / len(bucket_scores), 2)
            print(f"{bucket}: avg={avg} ({len(bucket_scores)} posts)")


if __name__ == "__main__":
    main()

The Bigger Takeaway: Local LLM APIs in Python

The real value in all of this isn’t just the ability to score blog posts, it’s realizing how approachable it is to build practical workflows around local LLMs. Once you step back and look at what’s actually happening, the pattern is surprisingly simple. You prepare some input, send it to a local model over HTTP, enforce a structured response, and then post-process the result into something useful. That’s really the entire loop.
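Stripped of the blog-specific details, that loop fits in a few lines. In this sketch the model call is stubbed out; in practice `call_model` would POST the prompt to the local endpoint and return the raw text:

```python
import json

def call_model(prompt):
    # Stub: in practice this would POST to the local LLM endpoint
    # (e.g. Ollama's /api/generate) and return the raw text response.
    return '{"sentiment": "positive"}'

def analyze(text):
    """Prepare input -> query model -> enforce structure -> post-process."""
    prompt = f"Return ONLY JSON with a 'sentiment' key.\n\n{text[:8000]}"
    raw = call_model(prompt)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # reject anything that isn't valid JSON
    return data.get("sentiment")

print(analyze("I really enjoyed this."))
```

Swap the stub for a real HTTP call and the prompt for your own schema, and the same skeleton covers nearly any text-analysis task.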

What makes this powerful is how broadly that pattern applies. It’s not limited to blog content or AI scoring — the same approach works for reviewing documentation, enriching datasets, analyzing internal knowledge bases, or building lightweight tooling for your own workflows. Anywhere you have text and want structured insight, this model fits naturally.

Once you get comfortable working this way, it starts to change how you think about automation. You stop looking at LLMs as standalone tools and start seeing them as components you can wire into systems. And when that clicks, you begin to spot opportunities for this kind of workflow almost everywhere.

Final Thoughts

This started as a simple curiosity, but it quickly evolved into a system that’s now just part of how I publish content. The goal was never to judge or label anything as AI or human. It was to better understand how each post was shaped, and to make that understanding visible to anyone reading. In the end, that’s what actually matters.


AI Usage Transparency Report

This is the rendered report for this very post:

  • Era: AI Era · Written during widespread use of AI tools
  • Signal composition: Repetition 0%, Tone 0%, Structure 0%, List 5%, Instructional 92%, Emoji 100%
  • Score: 0.3 · Moderate AI Influence
  • Summary: "The content shows consistent tone and structured formatting with moderate instructional patterns, suggesting partial AI assistance."
