Goodreads to Storygraph sync

tl;dr: github.com/cruftbox/goodreads-to-storygraph

I enjoy reading and tracking my progress on Goodreads. Recently, I started playing around with Storygraph, a similar site. Goodreads is part of Amazon, so when I finish a book on my Kindle, it automatically updates Goodreads for me.

Storygraph has a lot of neat features, like fun data representation of my books over a year.

Unfortunately, Storygraph does not sync with a Kindle to make the process automatic.

Thanks to House Lucia, there is a good guide to importing your Goodreads bookshelf into your Storygraph reading journal.

But that only covers my history; it doesn’t help with new books as I complete them. I wanted some way to sync the two sites.

I looked for syncing techniques, but since Storygraph does not have an API interface, I didn’t find anything on the interwebs to help me.


Undaunted, I reached out to claude.ai and asked for some help.

And we were off to the races, building a Python script to make the sync happen. When you start making repeated, complex asks of claude, you can run out of tokens, meaning you have to take a break from using it until your tokens are replenished 3-4 hours later. I’m paying for the Pro plan, but even that has limits.

It took about a day and a half to get it all working with multiple breaks for token refresh and touching grass. There were ~58 versions of the Python script made and tested to get it where I wanted it. There are error handling routines and logging for troubleshooting as well.

The script is here on Github: https://github.com/cruftbox/goodreads-to-storygraph

The script pulls your Goodreads shelf via the RSS feed, which was fairly simple.
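
For the curious, the RSS half is roughly this kind of thing. This is a minimal sketch, not the actual code from the repo; it assumes the feedparser library, and the user ID, shelf name, and Goodreads-specific fields are placeholders that may need adjusting:

```python
# Minimal sketch: pull the "read" shelf from a Goodreads RSS feed.
# Assumes the feedparser library; YOUR_USER_ID is a placeholder, and the
# Goodreads-specific fields (author_name, user_read_at) may or may not be
# exposed by feedparser, hence the defensive .get() calls.
import feedparser

GOODREADS_RSS = "https://www.goodreads.com/review/list_rss/YOUR_USER_ID?shelf=read"

def fetch_read_books():
    feed = feedparser.parse(GOODREADS_RSS)
    books = []
    for entry in feed.entries:
        books.append({
            "title": entry.title,
            "author": entry.get("author_name", ""),
            "finished": entry.get("user_read_at", ""),
        })
    return books

if __name__ == "__main__":
    for book in fetch_read_books():
        print(f'{book["title"]} by {book["author"]}')
```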

Since Storygraph doesn’t have an API interface, the script literally opens up a Chrome browser and does the clicking and typing automagically. Not really agentic behavior, but kinda like it.
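
To give a flavor of what that browser-driving looks like, here is a rough sketch using Selenium. The login URL, element IDs, and book title are illustrative guesses, not what the actual script on GitHub does:

```python
# Rough sketch of the browser-automation idea: drive Chrome, log in to
# Storygraph, and search for a finished book. Selenium is an assumption;
# the selectors and URLs below are illustrative only.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://app.thestorygraph.com/users/sign_in")

driver.find_element(By.ID, "user_email").send_keys("you@example.com")
driver.find_element(By.ID, "user_password").send_keys("hunter2" + Keys.RETURN)
time.sleep(3)  # crude wait for the login to complete

# Search for the book and open the first matching result.
driver.get("https://app.thestorygraph.com/browse?search_term=Project+Hail+Mary")
time.sleep(3)
driver.find_element(By.PARTIAL_LINK_TEXT, "Project Hail Mary").click()
```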

This was the most impressive part to me: watching a Python script drive a webpage without me doing anything.

Now Storygraph is synced with Goodreads.


In the end, this project wasn’t just about syncing two reading trackers; it was about the challenge of problem-solving with AI, learning new automation techniques, and pushing the limits of what I could build.

While synchronizing reading lists between platforms might seem like a small convenience, it represents the kind of personal automation that enhances our digital experience without relying on companies to provide official solutions.

I hope sharing this workflow inspires others to tackle their own “trivial but annoying” tech challenges, whether it’s syncing reading lists, automating repetitive tasks, or connecting services that don’t naturally talk to each other.

Sometimes the best solutions are the ones we build ourselves.

AI vs. logic puzzles

After playing with AI and basic cryptography, I decided to see if the various AI systems could solve basic logic puzzles. These puzzles were a childhood favorite. Is it any surprise that I ended up as an engineer?

I stopped by a local bookstore and picked up a book of logic puzzles.

Below is what logic puzzles look like. A few sentences and a grid to help solve the puzzle.

As the human control subject, I did the puzzle and checked the answer.

Solving these puzzles involves thinking about what you can infer from the information and tracking it on the grid. It’s kind of like Boolean logic: ruling out possible answers and marking them with an X. After doing a few puzzles, you learn the grid is incredibly helpful in ruling out possibilities and arriving at logical facts.
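
As an aside, the grid is really just systematic elimination, and the same idea can be brute-forced in a few lines of Python. Here is a toy sketch with a made-up three-person puzzle (not one from the book):

```python
# Toy illustration: test every possible assignment against the clues,
# the brute-force version of what the grid elimination does by hand.
from itertools import permutations

people = ["Ana", "Ben", "Cruz"]
careers = ["doctor", "pilot", "chef"]

def satisfies(assignment):
    return (assignment["Ana"] != "doctor"       # clue 1: Ana is not the doctor
            and assignment["Cruz"] != "pilot"   # clue 2: Cruz is not the pilot
            and assignment["Ben"] == "chef")    # clue 3: Ben is the chef

for perm in permutations(careers):
    assignment = dict(zip(people, perm))
    if satisfies(assignment):
        print(assignment)  # {'Ana': 'pilot', 'Ben': 'chef', 'Cruz': 'doctor'}
```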

My prompt to each model was “Complete this logic puzzle and provide the birthday, first name, career, and passport of each person mentioned”

Claude, ChatGPT, Gemini, Meta AI, Mistral, and Deepseek-R1 all got it wrong. Deepseek-r1:8B hallucinated. Here are Claude & ChatGPT’s answers:

Claude 3.5 Sonnet
ChatGPT-4-turbo

I then told each “That is incorrect. Please try again.” Each failed a second time to get it correct.

Some got close with a few things off, but none of them got it correct. Most seemed to understand what they were trying to solve, but there were some oddities.

This is Gemini 2.0 Flash’s answer. Note it has Doctor entered twice, showing it doesn’t understand a key element of the puzzle.

Gemini 2.0 Flash

Deepseek-r1:8B running locally on Ollama completely hallucinated and started inventing random names, passport numbers, and occupations.

Deepseek-r1:8B local

In each case, the models presented their solutions as correct and valid, but they were actually incorrect. Even after being told they were incorrect, they were unable to arrive at the correct answers.

This gets to the main learning of this exercise: LLMs are not always right, even when they present their answers with confidence. Without a method to check the validity of a model’s work and conclusions, the risk of faulty answers is real.

In my simple test, I was able to validate the correct answers and compare them with the results from the models. But in more complicated cases this might not be possible.

Imagine using an LLM to calculate the loads in a building design. Should you believe the answer? It’s one thing to get a silly puzzle wrong; nothing bad happens. But if a wrong answer ends up in a building collapse, there are huge real-world risks.

LLMs will continue to improve, but without reliable methods to verify their outputs, the risk of incorrect conclusions remains, especially in high-stakes applications like engineering or medicine.

Consumer-grade AI and decoding simple ciphers

I was watching a TV show where one of the plot devices was decoding an encrypted message. As with most TV shows, the solution was silly complicated. But it brought up memories of my childhood.

Believe it or not, there are magazines for people who like to do logic puzzles and simple cryptography. They still exist today. I would spend a lot of time counting letters and looking for patterns in ciphertext to decode them. Yes, probably not how you spent your childhood, but it’s how I spent part of mine. I found the logic puzzles especially calming.

I started wondering if the current round of generally available AI systems would be able to solve simple encryption. I’m sure that the NSA and other spook agencies have amazingly tuned AI/ML to crack encryption, but my thoughts were about systems that the average person has access to.

I went to cryptii.com, where I could do a lot of simple encryption with a variety of methods. The simplest kind of cipher is known as a Caesar cipher, where you shift the letters in the alphabet by a set number. For example, with a shift of 4, a becomes e, b becomes f, etc. Claude got this right in short order. I decided to compare the various AIs to see their capabilities.
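
For reference, a Caesar cipher is only a few lines of Python. This is just a quick sketch of the idea, not anything the models were given:

```python
# Caesar cipher sketch: shift each letter by a fixed amount, wrapping at z.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

print(caesar("attack at dawn", 4))    # exxego ex hear
print(caesar("exxego ex hear", -4))   # attack at dawn
```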

I tried the most basic alphabetic substitution cipher, where a becomes z, b becomes y, etc.
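
That mirror-image mapping is the classic Atbash cipher, and it is just as simple to sketch:

```python
# Atbash sketch: map a->z, b->y, and so on; applying it twice gives back
# the original text.
def atbash(text):
    def flip(ch):
        if ch.islower():
            return chr(ord("z") - (ord(ch) - ord("a")))
        if ch.isupper():
            return chr(ord("Z") - (ord(ch) - ord("A")))
        return ch
    return "".join(flip(ch) for ch in text)

print(atbash("this is my simple religion"))          # gsrh rh nb hrnkov ivortrlm
print(atbash(atbash("this is my simple religion")))  # this is my simple religion
```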

Only Claude got it correct. ChatGPT, Mistral, and Meta AI hallucinated; Gemini & Deepseek (online) paraphrased the answer but did not do a direct translation. I ran Deepseek-r1:8B locally with Ollama and it could not complete it.

ChatGPT, Mistral, and Meta AI all told me they had solved it, but were way off.

The interesting thing is that while Gemini got it wrong, it captured some of the basics of the original quote in the end.

“This is my simple religion. There is no need for temples; no need for complicated philosophy. Our own brain, our own heart is our temple; the philosophy is kindness.” is the original plaintext quote.

Deepseek R1 had similar results, kinda getting the gist of the plaintext.

My friend Leonard tried it on some of the higher power LLMs that he has access to.

let’s see o3-mini-high gets it. o3-mini does as well. o1 does. obviously o1-pro has no problem w/ it. the most surprising thing: my 4o does as well – it wrote a python script to do it (i have custom instructions that tell it to use python for math which probably encourages it to code, default version might not jump directly to write a python script to do it).

Seems like the problem comes from tokenization: the LLM has to make the leap to tokenize the individual letters instead of the whole ciphertext, decode the message, and then produce output tokens that match the actual decrypted text.

Look at the plaintext phrase “our own heart is our temple; the philosophy is kindness,” which Gemini transforms into “our own heart is the decoder; the protocols are kindness” and Deepseek transforms into “our own heart is our secret; the cautionary tale is kindness.”

It’s almost like it understands the meaning, but rephrases it with synonyms (kinda).

The most interesting part is that the models seem able to do the actual decryption, but have trouble making the output match the plaintext exactly when tokenizing the output.

I’m not sure how input tokenization is done in these models. I’d expect they would need to work at the character level, not the word or subword level. For output, it looks like word-level tokenization, hence the synonym-type answers.

I tried again with a more complicated alphabetical substitution cipher, where the letter substitutions are randomized.
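
A randomized substitution cipher is easy to generate but much harder to reverse without the key, since there is no fixed shift to find. A rough sketch of what I mean:

```python
# Randomized substitution sketch: the key is a shuffled alphabet. Decrypting
# without the key means frequency analysis; with it, you just invert the map.
import random
import string

def make_key(seed=None):
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(letters, shuffled))

def substitute(text, key):
    return "".join(key.get(ch, ch) for ch in text.lower())

key = make_key(seed=42)
ciphertext = substitute("the philosophy is kindness", key)
print(ciphertext)

inverse = {v: k for k, v in key.items()}
print(substitute(ciphertext, inverse))  # recovers the plaintext
```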

In this case, only Claude got the decryption correct and even recognized the quote.

ChatGPT and Deepseek:8B both failed to complete it.

Gemini, Meta, and Mistral all came up with gibberish that they assured me was the right decryption.

Deepseek thought it was a bible quote.

Leonard helped test the alphabetical substitution cipher with more powerful models.

Deepseek-V3 came back with an incorrect passage, and Deepseek-R1 simply gave up.

ChatGPT o3-mini-high (Reasoned for 3m 2s) got caught up on thinking it was a Tolkien passage.

ChatGPT o1-Pro didn’t get it right either, but it was heading down the same track a human would try, using letter frequency and common small words. It also got caught trying to match one of its known quotes, convincing itself that it was a quote by Francis Bacon, which is incorrect.

ChatGPT 4o went “on a wild chase writing lots of Python analysis code but gets nowhere, even with hints. It just can’t do it no matter how hard it tries” and failed as well.

It appears that most of the commonly available LLMs have a lot of difficulty with the kinds of tasks that involve the actual letters in prompts as opposed to words, like the strawberry issue.

Obviously, cryptography-focused AI models would be specifically trained to handle ciphertext and leverage brute-force computing to solve it.

The big caveat here is that I’m a rank amateur with AI/LLMs and this is all just me playing around with some tools, not serious work.

If you want to see what serious people are doing, check out Leonard or Simon, they are fantastic people.

A non-coder’s take on AI coding 

After hearing opinions on AI ranging from the end of humanity to the Singularity, I decided to dive in a bit deeper. While I’ve led projects at work that used AI/machine learning to analyze video, I hadn’t used it on any serious personal effort before.

I’m not a software coder. In the early days of personal computing, I did know some Assembly, Basic, Fortran, and even Pascal. But I’ve never made software development a skill of mine.

I’ve worked in technology my entire life and am comfortable configuring and working in the guts of the software I deal with, but I’ve never written any real software. While I have managed petabytes of storage and large technical operations, and dealt with the nitty gritty of networking, writing C, JavaScript, or Python is beyond me.

After hearing from much more savvy friends that AI tools do amazing stuff when helping to code software, I decided to give it a try.

Python Scripts

The first task was cleaning up a very old personal web site. The site goes back 20+ years. I wanted to move it to a new server and clean up a bunch of problems. First and foremost, most of the HTML used http: instead of https:, making all modern browsers unhappy. You’re probably thinking, “Well, that’s just a simple Python script.” And you’re right, but I don’t code Python.

I started with ChatGPT.

ChatGPT not only wrote the Python script but also guided me through installing Python and troubleshooting errors. In minutes, thousands of files were fixed—an impressive feat for someone with no coding background.
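
I’m not reproducing ChatGPT’s script here, but the kind of thing it wrote looks roughly like this (a sketch with a placeholder path, not the actual script):

```python
# Sketch of the http->https cleanup (not the actual ChatGPT script): walk the
# site directory, rewrite http:// links in every HTML file, keep a .bak copy.
from pathlib import Path

SITE_ROOT = Path("/var/www/oldsite")  # placeholder path

for html_file in SITE_ROOT.rglob("*.html"):
    original = html_file.read_text(encoding="utf-8", errors="ignore")
    updated = original.replace("http://", "https://")
    if updated != original:
        html_file.with_name(html_file.name + ".bak").write_text(original, encoding="utf-8")
        html_file.write_text(updated, encoding="utf-8")
        print(f"updated {html_file}")
```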

As I looked at the HTML to confirm the code had changed, I saw there were other issues. I had included email addresses, web stat trackers, and many other artifacts of the early web. ChatGPT was able to write more scripts that found and fixed all of these things. In the end, it wrote a general security scanner that looked for many common issues and surfaced them.

The process was very much like having a knowledgeable friend sitting over my shoulder, patiently walking me through the process. 

After this experience, I started trying several different AI systems to help with technical questions.  The one that I seemed to like the most was claude.ai.  Clear and to the point, claude almost always did the work I needed rather than telling me what work was needed. 

Instead of telling me to generically “check the logs for errors”, claude said “go to the logs that are located in /var/log and upload it to me for review”.  And sure enough it would parse the logs and give me specifics on what to try.

Most importantly, claude could help me with one of the most difficult issues in computing.

WordPress Plugin

I had recently returned my weblog to operation thanks to my friend Greg doing the needed necromancy. 

Seeing the linkblogs on sites I liked, I wanted one for my site.

We (claude and I) quickly iterated on the functionality and design. I didn’t want to use JSON for input. I wanted a manual entry process in the WordPress admin page and didn’t want it to run using jQuery.

It took minutes to get it working. Again, my mind was blown.

Making changes was straightforward and simple.

Within an hour, the linkblog was running and looked the way I wanted. Even if I were a WordPress shortcode expert, I don’t think I could have done this all in an hour.

Best of all, I can keep modifying the code as needed without having to call in favors from friends.

Speech Interface

After watching a few Youtube videos of people speaking to an AI and hearing a response like a conversation, I was intrigued.  Could I build something like that?

I asked claude, and the HTML code with the needed JavaScript popped up, and we were off to the races. Claude walked me through setting up an API key, getting tokens, and working through errors.

The process quickly got complicated: I needed to set up a kind of reverse proxy for the API calls to claude, which meant changing the Apache web server settings and installing a node.js server. Claude walked me through the setup and the testing of it easily.
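
To illustrate the idea only (my real setup used Apache and a node.js server, not Python): the proxy’s whole job is to keep the API key on the server so the browser’s JavaScript never sees it. A minimal sketch, assuming the Flask and requests libraries and Anthropic’s Messages API endpoint:

```python
# Sketch of the reverse-proxy idea (the actual setup used Apache + node.js):
# the browser posts to /api/chat on my server, and the server adds the API
# key before forwarding the request to Anthropic.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"

@app.post("/api/chat")
def chat():
    resp = requests.post(
        ANTHROPIC_URL,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],  # key stays server-side
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json=request.get_json(),
        timeout=60,
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5001)
```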

At each point, claude was positive and explained what needed to be done. It never sighed, rolled its eyes, or got exasperated at me.

After about an hour, it was running and I was able to ask important questions.

Improvements were easy: a better UI design and even a button to stop the audio when it got long-winded.

For some this may not be impressive, but to me it was mind blowing.  A range of possibilities opened up in my mind.  No longer stuck with googling for someone else’s work on the web that might fit my needs, I am actually able to build what I dream up.

I’ve played with a few different things to scratch my own itches and I continue to see what many others see in the use of AI in software development.

What does it all mean?

Fuck if I know. 

I just looked at a small slice of what AI tools might be capable of in coding. In that area, it seems clear that the use of AI tools by software developers will be a game-changing addition that allows more to be done in a shorter amount of time. I don’t see it eliminating the need for humans, just as pneumatic nailguns didn’t eliminate the need for human carpenters. Just powerful tools to let humans get things done.

I don’t see AI diagnosing networking problems and being able to go room to room examining fiber jackfields or replacing QSFPs. Or AI being able to read a room of execs and their body language when presenting ideas. Or AI being able to stop people from having bad video conference etiquette.

But the AI tools are currently a huge boon to software development and troubleshooting. I know many people who want nothing to do with AI and have strong feelings about its use. I get that. But to reject its use in this area is like sticking with a typewriter instead of trying a word processor back in the 1980s. A quixotic quest to maintain the status quo when the world is moving forward quickly.

Dad advice

I made a short video of dad-type advice I’d collected over the years.

Wasn’t feeling great when I made it and should have brought more energy to it.

Dad Advice from a Certified Dad
