Making my grandmother’s chocolate cake

My grandmother was born in 1898 and lived through the Great Depression. This is her recipe for chocolate cake. It doesn’t use eggs, making it vegan as well.

My mother watched her make it and wrote down the recipe, as my grandmother made it from memory.

Granny’s Black Cake
(Argos Chocolate Cake)

Mix Together:
     3 cups flour (flat)
     2 cups sugar (slightly rounded)
     2 teaspoons baking soda
     6 heaping tablespoons cocoa
     1 teaspoon salt

In a bowl, or in a well in the dry ingredients, add:
     2/3 cup vegetable oil
     2 tablespoons white vinegar
     2 cups water
     2 teaspoons vanilla extract

Mix wet ingredients with dry ingredients till well blended.
Put batter in a greased cake pan (9” x 13” or Bundt pan).
Bake at 350°F for 35 minutes (325°F if using a glass pan).
Toothpick should come out dry.

How to make my grandmother’s chocolate cake


How to make a smoke rub

This is the smoke rub I make at home. I’ve been using the same basic recipe for over 20 years.


Smoke Rub

1 cup brown sugar
1/2 cup salt
2 tablespoons paprika
2 tablespoons red pepper
1 tablespoon ground cumin
1 tablespoon ground celery seed
1 teaspoon ground mustard
1/2 teaspoon cayenne pepper

Goodreads to Storygraph sync

tl;dr: github.com/cruftbox/goodreads-to-storygraph

I enjoy reading and tracking my progress on Goodreads. Recently, I started playing around with Storygraph, a similar site. Goodreads is part of Amazon, so when I finish a book on my Kindle, it automatically updates Goodreads for me.

Storygraph has a lot of neat features, like fun data representations of my reading over the year.

Unfortunately, Storygraph does not sync with a Kindle to make the process automatic.

Thanks to House Lucia, there’s a good guide to importing your Goodreads bookshelf into your Storygraph reading journal.

But this only covers my history; it’s not useful for new books as I complete them. I wanted some way to sync the two sites.

I looked for syncing techniques, but since Storygraph does not have an API, I didn’t find anything on the interwebs to help me.


Undaunted, I reached out to claude.ai and asked for some help.

And we were off to the races, building a Python script to make the sync happen. When you start making repeated, complex asks of Claude, you can run out of tokens, meaning you have to take a break from using it until your tokens are replenished 3-4 hours later. I’m paying for the Pro plan, but even that has limits.

It took about a day and a half to get it all working with multiple breaks for token refresh and touching grass. There were ~58 versions of the Python script made and tested to get it where I wanted it. There are error handling routines and logging for troubleshooting as well.
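The actual script’s error handling and logging are its own; as a rough sketch of the shape such routines take, here is a minimal retry-with-logging helper (the function name, attempt count, and log format are my own choices, not taken from the repo):

```python
import logging

# Log to stderr with timestamps; the real script logs for troubleshooting too.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def with_retries(action, attempts=3):
    """Run action(), logging each failure and retrying up to `attempts` times."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception as exc:
            last_exc = exc
            logging.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
    logging.error("giving up after %d attempts", attempts)
    raise last_exc
```

Wrapping each flaky step (a page load, a button click) in a helper like this is what keeps a browser-driving script from dying on the first hiccup.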

The script is here on Github: https://github.com/cruftbox/goodreads-to-storygraph

The script pulls your Goodreads shelf via the RSS feed, which was fairly simple.
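The RSS half really is simple: the feed is plain XML, so the standard library can parse it. A sketch, using a made-up two-item fragment (the exact element names in Goodreads feeds, like `author_name`, should be verified against your own shelf’s feed):

```python
import xml.etree.ElementTree as ET

# Stand-in for the XML a Goodreads shelf RSS feed returns.
SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <item>
      <title>The Hobbit</title>
      <author_name>J. R. R. Tolkien</author_name>
    </item>
    <item>
      <title>Dune</title>
      <author_name>Frank Herbert</author_name>
    </item>
  </channel>
</rss>"""

def parse_shelf(xml_text):
    """Extract (title, author) pairs from a shelf RSS feed."""
    root = ET.fromstring(xml_text)
    books = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        author = item.findtext("author_name", default="")
        books.append((title, author))
    return books

print(parse_shelf(SAMPLE))
```

In the real script you’d fetch the feed over HTTP first, then diff the result against what Storygraph already has.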

Since Storygraph doesn’t have an API, the script literally opens up a Chrome browser and does the clicking and typing automagically. Not really agentic behavior, but kinda like it.
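Browser-driving like this is typically done with Selenium WebDriver. A sketch of what one step might look like, with the caveat that the Storygraph URLs, form field names, and CSS selectors below are illustrative guesses, not the repo’s actual code; you would inspect the live pages and adjust:

```python
def add_book_to_storygraph(title, author, email, password):
    """Drive a real Chrome window to log in and mark one book on Storygraph.

    Selectors and paths here are placeholders; Storygraph can change its
    markup at any time, so treat this as the shape of the approach only.
    """
    # Imported lazily so the rest of the script loads without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Log in (field names assume a standard Rails-style sign-in form).
        driver.get("https://app.thestorygraph.com/users/sign_in")
        driver.find_element(By.NAME, "user[email]").send_keys(email)
        driver.find_element(By.NAME, "user[password]").send_keys(password)
        driver.find_element(By.CSS_SELECTOR, "input[type=submit]").click()

        # Search for the book and click the first action button in the result
        # pane (placeholder selector).
        driver.get("https://app.thestorygraph.com/browse?search_term="
                   + title + " " + author)
        driver.find_element(By.CSS_SELECTOR, ".book-pane button").click()
    finally:
        driver.quit()
```

Each `find_element` call is where the “clicking and typing” happens: Selenium locates the element in the rendered page and fires real browser events at it.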

This was the most impressive part to me. Watching the Python script drive a webpage without me touching anything is quite something.

Now Storygraph is synced with Goodreads.


In the end, this project wasn’t just about syncing two reading trackers; it was about the challenge of problem-solving with AI, learning new automation techniques, and pushing the limits of what I could build.

While synchronizing reading lists between platforms might seem like a small convenience, it represents the kind of personal automation that enhances our digital experience without relying on companies to provide official solutions.

I hope sharing this workflow inspires others to tackle their own “trivial but annoying” tech challenges, whether it’s syncing reading lists, automating repetitive tasks, or connecting services that don’t naturally talk to each other.

Sometimes the best solutions are the ones we build ourselves.

AI vs. logic puzzles

After playing with AI and basic cryptography, I decided to see if the various AI systems could solve basic logic puzzles. These puzzles were a childhood favorite. Is it any surprise that I ended up as an engineer?

I stopped by a local bookstore and picked up a book of logic puzzles.

Below is what a logic puzzle looks like: a few sentences and a grid to help solve it.

As the human control subject, I did the puzzle and checked the answer.

Solving these puzzles involves thinking about what you can infer from the information and tracking it on the grid. It’s a bit like Boolean logic: you rule out possible answers and mark them with an X. After doing a few puzzles, you learn the grid is incredibly helpful in ruling out possibilities and arriving at logical facts.
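The grid technique can even be written down as code. A toy sketch with made-up names and careers (not the puzzle from the book): each person starts with every career possible, clues cross options off, and once someone is down to one option it gets removed from everyone else.

```python
# Each person starts with the full set of possible careers.
people = ["Alice", "Bob", "Carol"]
careers = {p: {"doctor", "pilot", "teacher"} for p in people}

def eliminate(person, career):
    """Mark an X on the grid: this person does not have this career."""
    careers[person].discard(career)
    # Propagate: if only one option remains, no one else can have it.
    if len(careers[person]) == 1:
        (only,) = careers[person]
        for other in people:
            if other != person:
                careers[other].discard(only)

# Clues: Alice isn't the doctor or the pilot; Bob isn't the pilot.
eliminate("Alice", "doctor")
eliminate("Alice", "pilot")   # Alice must be the teacher
eliminate("Bob", "pilot")     # so Bob is the doctor, leaving Carol the pilot

solution = {p: next(iter(options)) for p, options in careers.items()}
print(solution)
```

Three eliminations and the propagation step pin down all three people, which is exactly what filling in the grid does by hand.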

My prompt to each model was “Complete this logic puzzle and provide the birthday, first name, career, and passport of each person mentioned.”

Claude, ChatGPT, Gemini, Meta AI, Mistral, and Deepseek-R1 all got it wrong. Deepseek-r1:8B hallucinated. Here are Claude’s and ChatGPT’s answers:

Claude 3.5 Sonnet
ChatGPT-4-turbo

I then told each “That is incorrect. Please try again.” Each failed a second time to get it correct.

Some got close with a few things off, but none of them got it correct. Most seemed to understand what they were trying to solve, but there were some oddities.

This is Gemini 2.0 Flash’s answer. Note it has Doctor entered twice, showing it doesn’t understand a key element of the puzzle.

Gemini 2.0 Flash

Deepseek-r1:8B running locally on Ollama completely hallucinated and started inventing random names, passport numbers, and occupations.

Deepseek-r1:8B local

In each case, the models presented their solution as correct and valid, but they were actually incorrect. Even after being told they were incorrect, they were unable to arrive at the correct answers.

This gets to the main lesson of this exercise: LLMs are not always right, even when they are confident in their answers. Without a method to check the validity of a model’s work and conclusions, the risk of faulty answers is real.

In my simple test, I am able to validate the correct answers and compare them with the results from the models. But in more complicated cases, this might not be possible.

Imagine using an LLM to calculate the loads in a building design. Should you believe the answer? It’s one thing to get a silly puzzle wrong; nothing bad happens. But if a wrong answer leads to a building collapse, there are huge real-world risks.

LLMs will continue to improve, but without reliable methods to verify their outputs, the risk of incorrect conclusions remains, especially in high-stakes applications like engineering or medicine.