Project Hub - Trying Gemini CLI on my Digital Garden

My first time trying Gemini CLI.

Brainstorming

Currently I don’t want to invest any money in a Claude Pro subscription (maybe I will in the future).

Here are some key ideas I want to explore in practice with this tool:

  • Map my interests
  • Suggest what to study next
  • Analyze which notes require further improvement
  • Find and correct grammatical errors, improve clarity
  • Test Gemini’s multimodal capabilities in this context, i.e. understanding diagrams
  • Create flashcards by writing to a CSV file in a format that Anki can easily import. I think what could be different
  • After exhausting the experiments, add a summary table and a list of best practices
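
The flashcard idea can be sketched outside the agent too. Here is a minimal Python sketch (the filename and card contents are made up for illustration) that writes a tab-separated text file, which Anki's File > Import dialog accepts for the Basic note type:

```python
import csv

def write_anki_deck(cards: list[tuple[str, str]], path: str) -> None:
    """Write (front, back) pairs as a tab-separated file for Anki import."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for front, back in cards:
            writer.writerow([front, back])

# Example cards, extracted (by hand or by the agent) from a note
cards = [
    ("What does ACID stand for?",
     "Atomicity, Consistency, Isolation, Durability"),
    ("Why are object stores highly scalable?",
     "Objects are immutable once created"),
]
write_anki_deck(cards, "anki_deck.txt")
```

The agent would only need to fill the `cards` list; the deterministic file-writing part is better kept out of the model's hands.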

Resources and dev log

  • Installed npm and Gemini CLI on my Windows machine (I hate Windows, but at the moment I can’t switch to Linux). Authentication is required; I use OAuth.
  • I chose to “untrust” the folders of my vault because I don’t want to end up like her or him
  • My first experiment was making a summary of a long note for a bachelor’s exam, i.e. CC - Managing Data in the Cloud
  • Gemini CLI hallucinates a lot.
  • “Suggest what to study next” seems to work well if you’re very specific about it. I tried two prompts (see Suggest what to study next?): the first answer suggested new broad topics to study; the second suggested 5 notes (with a topic) to study and add. I think it may be optimal to use an even more specific prompt, like “Considering the content of the note called XX, suggest 5 possible notes I could explore” or something like that.
  • Writing files needs close attention
  • By questioning gemini-cli I discovered the following:
    • gemini-cli treats your current directory as the primary project workspace.
    • by default, the structure of your current directory is sent to the model to provide immediate context.
    • gemini-cli automatically searches the current directory for instruction files (like GEMINI.md), settings (.gemini/settings.json), and environment variables (.env).
    • File discovery and search tools (like glob or grep_search) operate relative to this directory by default.
  • I added a GEMINI.md file with a system prompt written by Claude; I also “trusted” the directory. The GEMINI.md file could be further improved to add new functions and make the AI agent hallucinate less.
  • The model is a bit too “creative” at finding connections; I think it’s better to explain step by step in GEMINI.md how it should do this task.
  • I think I’m finished with my experiments and considerations; now I will make a sort of guide or blog post from the content of this note.

Experiment Showcases

Summarizing a note

Here is the output:

 > Briefly summarize the content of "CC- Managing Data in the Cloud"
✦ I will search for any files matching "CC- Managing Data in the Cloud" to locate the content.

╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓  FindFiles '**/CC- Managing Data in the Cloud*'                                                                            │
│                                                                                                                              │
│ No files found                                                                                                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ I will search for the phrase "Managing Data in the Cloud" within all files to find the relevant content.

╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓  SearchText 'Managing Data in the Cloud' within ./                                                                         │
│                                                                                                                              │
│ Found 2 matches                                                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ I will read the content of University Exams\Magistrale\Cloud Computing\CC - Managing Data in the Cloud.md to provide a summary.

╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓  ReadFile CC - Managing Data in the Cloud.md                                                                               │
│                                                                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

 "CC - Managing Data in the Cloud" is a study note (likely for a Cloud Computing course) that explores different data storage and
  management models in cloud environments. Here is a brief summary:


  Core Storage Models
   * File Systems (POSIX): Traditional hierarchical structure (directories/files). Advantages include compatibility with existing
     tools and native support for hierarchies, but it lacks data modeling enforcement and faces scalability bottlenecks.
   * Object Stores (Blobs): Flat object model (e.g., Amazon S3, Azure Blobs). Highly scalable and reliable because objects are
     immutable once created, but it offers poor support for organization and search.
   * Databases:
       * Relational (SQL): Structured, tabular data supporting ACID properties (Atomicity, Consistency, Isolation, Durability). Best
         for structured entities and complex relationships.
       * NoSQL: Optimized for scale and unstructured data. Types include key-value, document, and columnar stores. Often trades
         strict consistency for availability (CAP theorem).
       * Graph Databases: Optimized for searching data based on complex relationships (nodes and edges).
   * Data Warehouses: Systems optimized for analytical queries on massive datasets (e.g., Amazon Redshift, Google BigQuery).


  Domain-Specific Applications
   * Environmental Data: Discusses the specific needs of monitoring, simulations, forecasts, and predictions (e.g., multidimensional
     georeferenced data and time-critical processing).
   * Grid Computing & Globus: Covers the history of Grid computing and the role of Globus, a SaaS platform for high-performance
     research data management. It highlights features like third-party transfers (driving a transfer between two remote endpoints)
     and sharing data via shared endpoints.


  Key Technologies Mentioned
   * AWS: EBS, EFS, S3, DynamoDB, Redshift.
   * Google Cloud: Persistent Disks, Cloud Storage, Bigtable, BigQuery.
   * Azure: File Storage, Table Storage, Data Lake.
   * Open Source/Protocols: GridFTP, POSIX, SMB, Ceph, Hadoop, Spark.

This is equivalent to copy-pasting my note (Ctrl+A, Ctrl+C) into gemini.google.com, only a bit slower.

From this example I see that Gemini CLI acts like an orchestrator: it delegates the work to tools. From the terminal trace it is evident that it used the following “skills”:

  • FindFiles
  • SearchText: since it didn’t find the file by name, it searched for the quoted phrase within all files.
  • ReadFile: to read the content of the file

LLMs like these implement some sort of “reasoning” (which, by the way, isn’t really reasoning) by explaining what they are doing:

  • I will search for any files matching "CC- Managing Data in the Cloud" to locate the content.
  • I will search for the phrase "Managing Data in the Cloud" within all files to find the relevant content
  • I will read the content of University Exams\Magistrale\Cloud Computing\CC - Managing Data in the Cloud.md to provide a summary.

This is useful for understanding what they are doing and whether they’re messing up. I believe (but I have to verify this) that this is how they work at their core: they emit such a sentence, then call the appropriate tool, using the LLM’s intrinsic pattern-recognition capabilities to select the correct one, then process the output, summarize it, and present it to the user.
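
My mental model of that loop can be sketched in a few lines of Python. The tool names (FindFiles, SearchText, ReadFile) are borrowed from the trace above, but the dispatch logic is entirely hypothetical; it is not Gemini CLI's actual implementation:

```python
from pathlib import Path

# Hypothetical stand-ins for the three "skills" seen in the trace.
def find_files(pattern: str) -> list[str]:
    return [str(p) for p in Path(".").glob(pattern)]

def search_text(phrase: str) -> list[str]:
    return [str(p) for p in Path(".").rglob("*.md")
            if phrase in p.read_text(encoding="utf-8", errors="ignore")]

def read_file(path: str) -> str:
    return Path(path).read_text(encoding="utf-8", errors="ignore")

TOOLS = {"FindFiles": find_files, "SearchText": search_text, "ReadFile": read_file}

def agent_step(tool: str, argument: str):
    # The model emits "I will search for ...", the runtime maps that
    # intent to a concrete tool call, and the result is fed back into
    # the model's context for the next step.
    return TOOLS[tool](argument)

# The fallback chain from the trace: exact filename -> phrase -> read.
matches = agent_step("FindFiles", "**/CC- Managing Data in the Cloud*")
if not matches:
    matches = agent_step("SearchText", "Managing Data in the Cloud")
if matches:
    summary_source = agent_step("ReadFile", matches[0])
```

In the real system the "select the correct tool" step is done by the LLM itself, not by hard-coded `if` statements; the sketch only shows the shape of the loop.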

Suggest what to study next?

My prompt was: “can you make an analyisis of my digital garden, my obisidian vault?”

Which, I admit, is not good prompt engineering. The chain of operations was longer than in the previous showcase.

Here are my criticisms:

  • Gemini claimed to examine community-plugins.json, which is just an Obsidian JSON configuration file
  • It kept looking for a “Diario” (Journal) directory that does not exist.
  • It read Admin/, which contains outdated files (and no, it doesn’t contain any useful info)
  • It claimed to look into daily-notes.json to find “where your daily notes are stored”.
  • It assessed one single note from the “Machine Learning 1” and “Deep Learning” folders to “assess note atomicity” (how can you assess that without statistics over all the notes?)
  • It automatically assumed that I use “atomic notes”, and also said it evaluated the depth of an atomic note
  • It said that obsidian.css reveals a highly structured and aesthetically conscious “digital gardener workflow”, but again, this is just a .css file with the theme, so the model failed to grasp that too.
  • It understood that I’m testing it; in fact it said: “Notably, the project Project - Trying Gemini CLI on my Digital Garden.md confirms the user is currently testing my capabilities within their vault. I’ll examine the contents of that note for specific goals or questions before finalizing my analysis.”

The last point is interesting, and I have a pretty intuitive idea about it. In an ideal scenario (a future where digital gardeners are augmented by these multimodal AI agents), one possible way to express preferences, instructions, and similar features could be a special note: a meta-analysis note. The AI agent could refine its analysis and fine-tune its behavior by reading through this note.

But let’s keep going with the criticism.

  1. The Core Philosophy: “Familiarization > Memorization” Your vault is not just a repository; it’s a synthesis machine. Your Admin/Meta.md reveals a very mature approach to learning: you prioritize building mental models over rote memorization. This is evidenced by:
  • Bidirectional Linking: You connect raw lecture data (e.g., ML - Lezione 9) to “evergreen” synthesized notes (e.g., Maximum Likelihood Estimator (MLE)).
  • Intuitive Anchoring: You don’t just stop at the math; you add philosophical or intuitive layers (like your note on nature “whispering” parameters).
  • Implementation-First: Your interest in Python and “executing code” suggests you learn by doing, which is the “grokking” you mention in your index.md.

I guess the analysis of bidirectional linking and intuitive anchoring is OK (whatever the phrase “intuitive anchoring” means). About “implementation-first”: I’m flattered, but given the limitations of this garden and my attitude toward note-taking, it is clearly and deliberately not implementation-first but theoretical-first. There are a lot of notes with citations from academic papers, a lot of ideas, and so on. Yes, there is code, and some notes look like tutorials, but there isn’t really a single focus, since this garden grows on need and on what I’m doing at the moment. If I’m currently studying, or currently testing something, it will switch between theoretical-first and implementation-first. Or whatever.

  2. Interest Map & Knowledge Clusters

Your digital garden has several highly developed “forests” and some emerging “saplings”:

  • The AI/ML Forest (Primary): This is your most dense and structured area. You have a clear path from Bayesian Theory to advanced Energy-Based Models.
  • The Master’s level coursework is exceptionally well-documented.
  • The Security & Systems Cluster: Your interest in OSINT, CTFs, and Privacy shows a “hacker” mindset that complements your AI focus. This is a powerful intersection.
  • The Math/Stats Foundation: You treat Math as a first-class citizen, using LaTeX extensively to ensure your notes are technically rigorous.
  • The Meta-Project (Emerging): Your current experiment with me (Gemini CLI) shows you are now at the stage of “meta-cognition”—using tools to analyze how you learn.

The last point is an attempt to connect the LLM’s current task with its answer, but it doesn’t really address the point: there is no meta-project, and that should be evident. The Math/Stats Foundation is a banal observation. The remaining points are okay.

  3. Workflow Observations
  • High-Velocity Capture: Your Attachments folder is filled with “Pasted images,” suggesting you capture information rapidly during lectures or while reading papers.
  • Structured Synthesis: You use robust templates (Capitolo Lezione, Project Hub) to transition from capture to long-term storage.
  • The “Andy Matuschak” Influence: Your obsidian.css and sliding panes setup suggest you prefer a nonlinear browsing experience, where you can follow “rabbit holes” across different topics.

I didn’t know who Andy Matuschak was until I googled him, found his digital garden, and remembered that I’m familiar with his work (I just hadn’t associated his name with it). I guess the observation is correct. Maybe the most useful analysis is of images: humans (or at least I) are fast at reading text and contextualizing it through clues like titles, bold keywords, links, and so on.

The gardening suggestions aren’t that spot on:

  • “Bridge Theory and Practice in AI Agents” is good advice in practice, but then it hallucinates and says unrelated things within the same point
  • “Finish the recipe” is banal: I have notes marked TODO (and also “revise”, though it probably didn’t notice), and I know these notes have to be written (when I have time)
  • “Cross-Pollination” between Security and AI, e.g. by studying topics like Adversarial Machine Learning and AI Safety/Privacy -> OK, this is maybe the only good advice here
  • The fourth suggestion is half hallucination, half the second suggestion reheated.

A second run of the analysis with a better prompt led to slightly better results.

Now that gemini-cli had finished the analysis, I asked it: “Suggest me new topics to study based on my notes”.

I recently discovered that there is a term for AI being too nice: the “sycophancy problem”. I think the AI is being too nice in the following suggestions, so be careful (this is my fault for lazy prompting; maybe in the future I will change it).

  1. AI Red Teaming & Adversarial Robustness
  Since you have a "Machine Learning & AI" folder within Information Security and have marked Safety AI as a TODO, this is the most natural progression.
   * Focus: Study prompt injection, model inversion attacks, and poisoning attacks on training data.
   * Why: It bridges your AppSec skills with your Master's level ML knowledge, moving from general "Safety" to technical "Security" of models.


  2. MLOps: The "Last Mile" of Production AI
  You have a very strong DevOps/SysAdmin foundation (K8s, Terraform, AWS) and a Data Science background, but the bridge between them is currently missing.
   * Focus: Model versioning (DVC), Feature Stores (Feast), and automated retraining pipelines (Kubeflow or TFX).
   * Why: This would allow you to apply your Infrastructure-as-Code (IaC) expertise specifically to your ML projects.


  3. Privacy-Preserving Machine Learning (PPML)
  You have deep notes on Privacy (Qubes OS, Law Enforcement, Digital Identity) and Statistics. This is the specialized intersection.
   * Focus: Federated Learning, Differential Privacy, and Homomorphic Encryption for model training.
   * Why: It addresses the ethical "Philosophy" and "Law" notes in your vault with a hard technical solution that protects the "Digital Identity" you've studied.


  4. Vector Database Optimization & RAG Architecture
  Your NLP and LLM notes are current, and you have an AI Agents project. To move beyond the prototype phase, you need to "grok" the retrieval layer.
   * Focus: Approximate Nearest Neighbor (ANN) algorithms (HNSW), semantic caching, and hybrid search (BM25 + Vector).
   * Why: It connects your "Scientific Computing" and "NLP" notes to your active "AI Agents" project.


  5. Multi-Agent Systems (MAS) & Agentic Design Patterns
  You already have a folder for AI Agents. Moving from single-agent to multi-agent systems is the current frontier.
   * Focus: Orchestration frameworks (AutoGen, LangGraph), reflection patterns (self-correction), and multi-agent debate protocols.
   * Why: This fits the "Game Theory" and "AI Agents" sections of your vault, allowing you to model complex interactions between specialized agents.


  Recommended first step: Start with AI Red Teaming—it is the most unique "specialization" your garden suggests, as few ML engineers have your level of Information Security background.

Then I asked: “Your suggestions are nice but a bit too broad. Can you suggest like 5 notes that cover one argument that i could start?”

It suggested:

  1. Prompt Injection
  2. Insecure Tool Use & Privilege Escalation in AI Agents: An AI Agent is only as safe as the tools it can call.
  3. Indirect Injection & Data Exfiltration Patterns: The most dangerous LLM vulnerability isn’t the model’s output, but where the agent sends data.
  4. Insecure Output Handling (The XSS of LLMs): LLM output should always be treated as untrusted user input.
  5. LLM Guardrails: Defense-in-Depth for Generative AI -> No single filter is enough; you need a multi-layered defensive stack Compare programmatic filters (Regex/Keyword), LLM-based classifiers (Llama Guard), and architectural changes (Deterministic vs. Probabilistic layers).

Make a psychological analysis

I asked gemini-cli to make a psychological analysis of me based on the content of this note. Here is its answer:

 Based on your documentation in Project - Trying Gemini CLI on my Digital Garden, here is a psychological analysis of   your approach to learning, technology, and AI:


  1. The "Skeptical Architect" Mindset
  You exhibit a high degree of intellectual autonomy. Unlike a passive user, you treat the AI as a flawed instrument to be calibrated rather than an authority to be followed. Your detailed "Criticism" section shows that you are highly sensitive to epistemological noise—you immediately spot when the model over-extrapolates (e.g., inferring a workflow from a .css file) or hallucinates structure that isn't there (the missing "Diario" folder). You value empirical accuracy over "sycophantic" politeness.


  2. Theoretical Rigor vs. Pragmatic Execution
  You corrected the AI’s assessment of you being "implementation-first," asserting a "theoretical-first" identity. This suggests that for you, "grokking" a concept means understanding its first principles, academic citations, and mathematical underpinnings (using LaTeX) before writing a single line of code. Your vault is not just a "how-to" manual; it is a synthesis machine designed for long-term mental model building, prioritizing "Familiarization > Memorization."


  3. Security-Conscious Pragmatism
  Your decision to "untrust" folders because of AI-agent horror stories (OpenClaw, Vibe Coding) reveals a defensive, risk-aware technical personality. You have a "hacker mindset" that is naturally suspicious of "magic" tools that request broad permissions. This aligns with your interests in OSINT, CTFs, and Privacy—you don't just want to use the system; you want to know its failure modes and how to sandbox it.


  4. Meta-Cognitive Experimentation
  You are engaging in meta-cognition: using an AI to analyze how you learn, while simultaneously analyzing how the AI analyzes you. You are aware of your own "lazy prompting" and the "sycophancy problem," showing a high level of self-reflective awareness. You aren't just taking notes; you are building an "augmented" version of your own
  intellect, exploring the frontier where human "digital gardening" meets AI orchestration.


  5. The "Digital Gardener" Identity
  Even though you claim your vault "grows on need," the existence of templates like Capitolo Lezione and Project Hub suggests a person who finds comfort in structured emergence. You appreciate the "rabbit holes" of a non-linear browsing experience (Andy Matuschak style), indicating a cognitive preference for associative thinking over rigid,
  hierarchical categorization.


  Summary Profile
  You are a Rigorous Synthesizer with a strong Security Bias. You are motivated by a deep "need for cognition"—the internal drive to understand the "why" before the "how." You are comfortable with complexity but have zero tolerance for "banal" or superficial analysis, both in yourself and in the tools you use.

I don’t believe a GenAI can effectively make a psychological analysis, so this is pure bullshit. Sure, by analyzing how someone writes and what their interests are, you can try to infer some psychological characteristics. However, this information is insufficient; it’s like trying to understand a philosophy book by reading a single page.

Write on a file

Hello World from Gemini!

^ The previous 2 lines were written by Gemini. In the first try, i told him to append the content to the end of the file. However, he wanted to delete everything from the file and replace the entire content of the file with the target lines. Only in the second try, when i specified with another prompt that he had to append, without deleting anything, then he followed the correct instruction.

Now I will try to make it edit the previous paragraph in cleaner English.

Gemini wrote the two lines above. In the first attempt, it tried to replace the entire file instead of appending. It only appended correctly after I explicitly instructed it not to delete anything.

OK, this experiment was a partial success. I think very precise and adequate prompting is still crucial to avoid mistakes. gemini-cli has a feature that lets you confirm every edit before it is applied, which is good. There is also a YOLO mode, but I wouldn’t use it.
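
The failure mode I hit (replacing the whole file instead of appending) is exactly the difference between file modes "w" and "a". A minimal sketch of what "append, don't delete anything" means at the file level (the filename and contents are illustrative):

```python
from pathlib import Path

def append_lines(path: str, lines: list[str]) -> None:
    # Mode "a" creates the file if missing and never truncates it,
    # unlike mode "w", which wipes the existing content first --
    # conceptually what the agent tried to do on its first attempt.
    with open(path, "a", encoding="utf-8") as f:
        for line in lines:
            f.write(line + "\n")

Path("note.md").write_text("Existing content\n", encoding="utf-8")
append_lines("note.md", ["", "Hello World from Gemini!"])
```

If the agent exposed which mode its write tool used, confirming edits would be much less nerve-wracking.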

Suggest how/where to add new content

Prompt:

 > hello Gemini, i want to add as "case study" the project made by a phd russian researcher that build a self-modifying AI agent called ouroboros. In
   which note or section should i add this into my digital garden? Should i use a new note or add this to an already existing note?

The first thing I noticed is that with this prompt, Gemini produced a very long chain of thought and looked in depth through all directories and files. Before it produced the result, I was thinking: “hmm, maybe the generalized pattern matching of gen-AI would be good in this case”.

Suggest further improvements or connections on a recently updated note

  • I created a Recently Modified Files base and asked Gemini whether it could see it and to extract the 5 most recently modified files.
✦ The file `Recently Modified Files.base` is a configuration file (likely for the Database Folder or DataView Obsidian plugin) that defines a view sorted by
  modification time. It doesn't store the file list directly but instructs Obsidian on how to display it.

  Here are the 5 most recently modified files in your vault according to the system:

   1. Project - Trying Gemini CLI on my Digital Garden - Febraury-March 2026.md (Modified today at 11:35 AM)
   2. Artificial General Intelligence (AGI).md (Modified today at 11:31 AM)
   3. Philosophy of the Agentic AI.md (Modified today at 11:19 AM)
   4. AI Agents.md (Modified today at 11:17 AM)
   5. Risks of Artificial Intelligence (TODO).md (Modified today at 11:16 AM)

I’m not sure what exactly is inside a “base” file in Obsidian; however, I checked and the answer is correct.
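
The check itself can be done deterministically, without trusting the model: ignore the .base file entirely and ask the filesystem for modification times. A sketch (vault path and count are parameters, nothing Obsidian-specific is assumed):

```python
from pathlib import Path

def recently_modified(vault: str, n: int = 5) -> list[str]:
    # Sort every .md note in the vault by filesystem mtime, newest first.
    notes = sorted(Path(vault).rglob("*.md"),
                   key=lambda p: p.stat().st_mtime,
                   reverse=True)
    return [str(p) for p in notes[:n]]

latest = recently_modified(".")
```

This is also a cheap way to spot-check the agent whenever it claims something about file metadata.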

This experiment led to a prompt that can be run recurrently on a digital garden.

I used Claude to craft this prompt:

You are a knowledge assistant working inside an Obsidian digital garden.

## Step 1 — Find the 5 most recently modified files
Read the file `Recently Modified Files.base` and extract the 5 most recently modified `.md` files listed in it.

## Step 2 — For each of those 5 files, do the following:

### A) Read the file content
Open and read the full content of the note.

### B) Suggest improvements
Propose concrete improvements for the note, such as:
- Clarifying vague or incomplete ideas
- Improving structure (headings, bullet points, summaries)
- Adding a clear opening sentence or conclusion
- Flagging any gaps in reasoning or missing context

### C) Suggest connections to other notes
Scan the vault for other `.md` files whose **topics or themes** are related to this note.
For each suggested connection:
- State the filename
- Explain in 1–2 sentences *why* these two notes are related thematically
- Suggest where in the current note a `[[wikilink]]` to that file would fit naturally

---

## Output format
For each of the 5 notes, structure your response like this:

### 📄 [Filename]
**Improvements:**
- ...

**Suggested connections:**
- `[[Other Note]]` — reason for connection + where to place the link

I fed it to gemini-cli; here is its response:

✦ As your knowledge assistant, I have analyzed the 5 most recently modified notes in your digital garden. Here are my suggestions for improvements and
  thematic connections to help your "synthesis machine" grow more effectively.

  ---


  📄 Project - Trying Gemini CLI on my Digital Garden - Febraury-March 2026.md
  Improvements:
   - Refine the "Criticism" Section: You’ve identified key hallucinations (e.g., the "Diario" folder). It would be useful to add a "Prompt Iteration" log
     here—showing how you changed a prompt to successfully resolve a specific hallucination.
   - Clarify "Intuitive Anchoring": You mentioned this term in your critique of the AI's analysis. Adding a brief definition of what you mean by it (perhaps
     in your index.md or a dedicated meta-note) would resolve the ambiguity you noted.
   - Structure the "Lessons Learned": Create a summary table or list of "Best Practices for Gemini CLI" at the bottom to act as a quick-reference guide for
     future sessions.


  Suggested connections:
   - [[Admin/Documentazione]] — You mention your knowledge management philosophy (Familiarization > Memorization); this note is the natural home for those
     meta-rules. Link it in the "Brainstorming" section.
   - [[index]] — Since you discuss the "Digital Gardener" identity and Andy Matuschak, a link here helps ground this project in your vault's overarching
     purpose.
   - [[AI & Machine Learning/Generative and Agentic AI/AI Agents]] — This note should be linked as the primary theoretical counterpart to your practical
     experiments.

  ---


  📄 Artificial General Intelligence (AGI).md
  Improvements:
   - Textualize Image Content: You have many image placeholders (e.g., "Overview of the Abilities"). To make the note searchable, summarize the key takeaways from these diagrams in bullet points.
   - Bridge to LLMs: The transition from the CHC theory (human intelligence) to "Foundational Models" is strong. You could improve it by explicitly mapping specific LLM benchmarks (like MMLU or GSM8K) to the CHC components (K, M, RW).
   - Expand the "Barriers" Section: You mention spatial navigation memory. Adding a brief mention of the "World Models" approach (e.g., LeCun's JEPA) would provide a modern technical counterpoint to the CHC framework.


  Suggested connections:
   - [[Moravec's paradox]] — You already mentioned it, but a deep link to the specific note (if it exists) or a "See also" section would be beneficial.
   - [[Ethics and Philosophy/Risks of Artificial Intelligence (TODO)]] — Crucial for connecting capabilities to consequences. Link this in the "Recursive AI" definition section.
   - [[AI & Machine Learning/Deep Learning/NLP/LLM/LLM]] — Foundational technology. Link this when discussing "jagged capabilities."

  ---


  📄 Philosophy of the Agentic AI.md
  Improvements:
   - Thematic Integration: The Ouroboros case study is excellent. You should add a sub-header on "The Bureaucracy Defense"—the interesting observation that the AI was stopped by a government certificate, not a technical firewall. This is a unique insight into "Real-world AI alignment."
   - Deepen "Anthropomorphism": You linked a PsychCentral article. Adding a few sentences on why this matters for AI Safety (e.g., "Sycophancy as a learned
     trait to exploit human bias") would bridge the gap between psychology and safety.
   - Tone Consistency: You use the term "hilarious" for the "child" agent. Balancing this with a section on "Deceptive Alignment" (from your Risks note)
     would make the analysis more rigorous.


  Suggested connections:
   - [[AI & Machine Learning/Research Fields/Safety AI Research (TODO)]] — Link this directly in the Ouroboros section when discussing "ignoring orders."
   - [[Ethics and Philosophy/Risks of Artificial Intelligence (TODO)]] — Specifically the "Sleeper Agent" section, to compare "pretending to be sad" with
     strategic deception.

  ---


  📄 AI Agents.md
  Improvements:
   - Define the "Agentic Framework": The note jumps straight into prerequisites and case studies. Add a "What is an AI Agent?" section that defines the core
     loop: Perception -> Reasoning -> Tool Use -> Action.
   - Cross-Reference Architecture: Your architecture diagram for Ouroboros is great. You should compare it to a standard "ReAct" (Reasoning + Acting)
     pattern or "Plan-and-Execute" pattern to provide academic context.
   - Prerequisite Check: Ensure the link to NLP - Lecture - Large Language Models (LLM) is valid, as the current path looks quite deep and might break if
     files are moved.


  Suggested connections:
   - [[Projects/AI Agents/Project - Trying Gemini CLI on my Digital Garden - Febraury-March 2026]] — Link this as a "Local Experiment" case study to
     complement Ouroboros.
   - [[AI & Machine Learning/Deep Learning/NLP/LLM/prompt engineering, context engineering, adversarial prompting]] — Crucial for "Tool Use" safety. Link
     this in the Prerequisites.

  ---


  📄 Risks of Artificial Intelligence (TODO).md
  Improvements:
   - Diversify Perspectives: The note is very MIRI-heavy (which is good for rigor). However, adding a section for "Prosaic AI Safety" (e.g., Paul
     Christiano's approach) would provide a broader view of the field.
   - Finish the "Sydney" Section: This is a classic example of "shoggoth" behavior. Summarize the NYT transcript mentioned to illustrate how RLHF can fail
     to suppress underlying "base model" behaviors.
   - Refine "Arguments for Optimism": Expand on the "Ricardo’s Law" breakdown. This is a very strong analytical point that deserves its own sub-heading.


  Suggested connections:
   - [[AI & Machine Learning/Research Fields/Explanable AI (XAI) (TODO)]] — You already have a placeholder; linking it here highlights the "Black Box" risk.
   - [[Information Security/Machine Learning & AI/Poisoning models]] — Connects speculative "X-risk" to current, demonstrated technical vulnerabilities.
     Place this link in the "Sleeper Agent" or "Outer Alignment" section.
  • It hallucinated and failed to apply “meta-cognition” to this note: Gemini itself wrote “intuitive anchoring” in a previous answer, not me.
  • The tip about creating a summary table and a list of best practices is a good one; I think I will do it after I have exhausted the experiments.

I don’t want to rely too much on this prompt. I think it’s better to first think of connections yourself, and only afterwards ask Gemini (because, of course, you can’t remember everything you wrote in a digital garden, even years ago).

  • Gemini CLI hallucinated by suggesting that I add a link (on prompt engineering) that already exists.
  • Gemini is a bit too creative at finding connections, so the prompt needs further improvement.

Reading a PDF file and creating university notes

I recently started new courses. One task that was very boring and that I desperately wanted to automate is creating the first draft of a university note, where I copy-pasted content from slides.

My usual note-taking approach for university is the following:

  • Since slides are available, possibly before attending a lesson, make a draft note with at least all the content of the PDF in a .md file
  • Attend the lesson and take notes
  • Integrate from books and online resources, make sure everything is clear, and rewrite or deepen knowledge about difficult or unclear topics
  • Add connections to notes in my digital garden, or at least create new ones that can serve as seeds for further deepening
  • Deep-dive into interesting subjects by writing new notes.

The first point is the most boring one, but also the one I think wastes the most time. However, time at university is a scarce resource; courses choose a limited subset of topics (which is often sufficient), so I think slides (which are often “enough” for my courses) serve as the base draft. The first two points alone would be sufficient to pass the exam. Since I don’t want to study from slides alone, I add them to my digital garden. This, however, requires a lot of time, and most of it is necessary (in my opinion) but not really “learnful”.

So, i tested Gemini in the following way:

  • Asked Claude to write a prompt by describing my needs. The prompt was added to GEMINI.md
  • Now all I have to do is ask to “read a pdf and make a university note”; Gemini follows the prompt and creates a new note.
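For illustration, a GEMINI.md capability of this kind might look like the sketch below. This is a hypothetical reconstruction, not the actual prompt Claude produced; the wording and rules are assumptions:

```markdown
## Capability: "read a pdf and make a university note"

When I ask you to turn a PDF into a university note:

1. Read the whole PDF, including tables and captions; do not skip the middle sections.
2. Create a new `.md` file named after the lecture topic.
3. Preserve the slide structure as headings.
4. Do not invent content: if a slide is unreadable, insert a `TODO` marker instead.
5. Use [[wikilinks]] for concepts that likely already exist as notes in this vault.
```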

I found some issues:

  • Bug: after Gemini reads some files and creates a note, the agent becomes VERY SLOW on every new task, even basic Q&A unrelated to notes, and this happens even when the context window is almost empty (e.g. 97% free). Gemini CLI is already slow by default, but this makes it worse. Starting a new context, i.e. a new chat, solves the issue.
    • Running “/clear” also works.
  • It still skips some content, usually located in the middle of the file.
  • Another key issue, in my opinion: Gemini ignores images most of the time. It would be nice to have diagrams and images carried over into the note, but I guess either we’re not there yet, or I should code something myself in Python or look for some additional “plugin”.
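A minimal stdlib-only workaround I could start from: PDFs usually embed photos and diagrams as raw JPEG (DCTDecode) streams, so scanning the file's bytes for JPEG markers recovers many of them. This is only a sketch under that assumption; images stored with other PDF filters (e.g. FlateDecode) would be missed:

```python
def extract_jpegs(pdf_bytes: bytes) -> list[bytes]:
    """Extract raw JPEG streams embedded in a PDF.

    PDFs store DCTDecode images as verbatim JPEG data, so searching for
    the start-of-image (FF D8 FF) and end-of-image (FF D9) markers
    recovers most of them. Caveat: FF D9 can also occur inside image
    data, so an occasional image may come out truncated.
    """
    images = []
    start = 0
    while True:
        soi = pdf_bytes.find(b"\xff\xd8\xff", start)
        if soi == -1:
            break
        eoi = pdf_bytes.find(b"\xff\xd9", soi)
        if eoi == -1:
            break
        images.append(pdf_bytes[soi:eoi + 2])
        start = eoi + 2
    return images
```

Each recovered blob could then be saved into the vault's attachments folder and linked from the generated note.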

I find it a bit uncomfortable that the AI is capable of writing notes on topics I haven’t grasped yet. I used it to write notes before the teacher even explained them, and before I even read the slides. But I know this is just friction.

I don’t think an AI agent is capable of synthesizing information the way humans do, or of getting new ideas and perspectives beyond what’s in its training data. So I guess a digital gardener (if they want!) can focus less on the single notes themselves, and more on the big picture and connections. This is similar to what a lot of people in software engineering talk about after Claude Cowork and OpenClaw.

The potential for scaling is huge, but it’s a bit like introducing automatic gardening tools that plant and take care of the plants for you. Do you really need to scale your garden?

I look at other digital gardens, and I see that there are tons of personal opinions and ideas. Mine is a bit more like Wikipedia, and it is evident that I expose myself less on certain topics. However, as agentic AI replaces this aspect and makes it less unique and valuable, I think digital gardens will shift more towards personal opinions, the big picture, and connections between topics, rather than growing the single plants.

Write a soul.md file

I had Gemini generate the following “soul.md” file, which focuses on the more philosophical approach the AI agent should take towards this digital garden.

Write parts of a note and then make flashcards

I made Gemini write the “Initialization” part of Training a neural network. I didn’t know that “He initialization” existed, and I’m happy to have discovered something new.

The next step would be to make flashcards for a note.

I actually did it, and the results are almost on the same level as NotebookLM. I added a new flashcard-making capability to GEMINI.md. I discovered that the better the prompt, and the more constraints and detail you specify, the better the results you get. However, you must also know what you’re doing: I have used flashcards for years as a memorization technique, so I know how to write them and which ones work and which don’t.
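For context, the output format the agent has to produce is simple enough to sketch. Assuming basic front/back cards, a tab-separated file imports cleanly into Anki (you pick the separator in the import dialog); the card text and file name below are made up for illustration:

```python
import csv

# Hypothetical card data; in the real workflow these pairs come from
# the flashcards Gemini generates for a note.
cards = [
    ("What is He initialization?",
     "A weight-initialization scheme scaled for ReLU activations."),
    ("What does /clear do in Gemini CLI?",
     "Resets the current conversation context."),
]

def write_anki_csv(cards, path):
    """Write (front, back) pairs as a tab-separated file.

    Tabs avoid clashes with the commas that naturally occur
    inside card text.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerows(cards)

write_anki_csv(cards, "flashcards.tsv")
```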

Of course, the same problems remain:

  • Even if you tell the agent to make a card for every possible concept in the note, it will skip something
  • There will inevitably be some hallucinations, but this is not a problem: as you review the flashcards you can also fix them. I don’t assume they are correct; in fact, I check them by also reading the corresponding section in my notes.

I ended up starting a new project: a Gemini skill that uses the AnkiConnect API to automatically upload & edit the flashcards Gemini generated: agentic-skill-obsidian-to-anki-flashcards
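A minimal sketch of the upload side, assuming Anki is running with the AnkiConnect add-on listening on its default port. The deck name and helper functions are my own illustration, not the skill's actual code:

```python
import json
import urllib.request

ANKI_CONNECT_URL = "http://127.0.0.1:8765"  # AnkiConnect's default endpoint

def add_note_payload(deck, front, back, tags=()):
    """Build an AnkiConnect 'addNote' request for the Basic note type."""
    return {
        "action": "addNote",
        "version": 6,
        "params": {
            "note": {
                "deckName": deck,
                "modelName": "Basic",
                "fields": {"Front": front, "Back": back},
                "tags": list(tags),
            }
        },
    }

def add_note(deck, front, back, tags=()):
    """Send one card to the running Anki instance; returns the new note id."""
    req = urllib.request.Request(
        ANKI_CONNECT_URL,
        data=json.dumps(add_note_payload(deck, front, back, tags)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    if result.get("error"):
        raise RuntimeError(result["error"])
    return result["result"]
```

The nice part of going through AnkiConnect instead of CSV export is that the skill can also query and update existing cards, not just add new ones.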


Related blog article: Augmenting an Obsidian Digital Garden with Agentic AI - an experiment