syke
Health: Passed
- License — AGPL-3.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 39 GitHub stars
Code: Failed
- rm -rf — Recursive force deletion command in scripts/smoke-artifact-install.sh
Permissions: Passed
- Permissions — No dangerous permissions requested
This tool provides a local-first, cross-platform agentic memory system. It syncs context and activity across multiple AI harnesses and chat sessions to improve continuity.
Security Assessment
The server processes and stores your local activity in an append-only timeline. It requires an external API key (e.g., OpenAI) to function, meaning it makes outbound network requests to third-party LLM providers. A code scan flagged a recursive force deletion command (`rm -rf`) inside a smoke test script. While this is common in development or testing scripts rather than core application code, it warrants a quick manual review of the file path before execution. No hardcoded secrets or dangerous explicit permissions were found. Overall risk rating: Medium.
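A check like the one that raised this flag can be sketched in a few lines. This is an illustrative scanner for manual review, not the scanner this listing actually used:

```python
import re

# Matches "rm -rf", "rm -fr", and combined-flag variants such as "rm -vrf".
# Illustrative pattern only; real scanners use broader rule sets.
RM_RF = re.compile(r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b")

def flag_rm_rf(script_text: str) -> list[str]:
    """Return the lines of a shell script that contain an rm -rf style command."""
    return [line.strip() for line in script_text.splitlines()
            if RM_RF.search(line)]
```

Running this over `scripts/smoke-artifact-install.sh` would surface the flagged line so the deletion target can be reviewed before execution.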
Quality Assessment
The project is actively maintained, with repository updates pushed as recently as today. It is properly licensed under AGPL-3.0. Community trust is growing but remains relatively small, indicated by 39 GitHub stars. The documentation is clear, comprehensive, and provides straightforward installation steps.
Verdict
Use with caution.
dynamic identity - agentic memory - your digital extension for all your AI
Syke
Syke is a local-first, cross-harness agentic memory system. It works with the AI harnesses you already use, keeps them in sync, and adapts its memory design to match your usage patterns.
AI-enabled work now means many sessions running in parallel across different harnesses, agents, repos, and chats. A decision made in one place often fails to reach the next place where it matters. Syke connects those threads so context carries forward.
That improves continuity and memory. It also makes the harnesses more efficient. They can rely more on what has already been discovered, decided, or learned, instead of starting over each time.
Syke introduces a kind of federation across your harnesses. Each harness can keep its own memory design and still rely on a general memory agent specialized in continuity across them. In that sense, Syke becomes a memory of memory.
Start Here
Use Syke directly
pipx install syke
syke auth set openai --api-key YOUR_KEY --model gpt-5-mini --use
syke setup
syke doctor
syke context
syke ask "What changed this week?"
syke setup reviews the current setup plan first. It ingests detected sources, runs initial synthesis when needed, and can enable background sync as a separate choice.
uv tool install
uv tool install syke
syke auth set openrouter --api-key YOUR_KEY --use
syke setup
From source
git clone https://github.com/saxenauts/syke.git && cd syke
uv sync --extra dev --locked
uv run syke auth set openai --api-key YOUR_KEY --model gpt-5-mini --use
uv run syke setup
Use Syke through an agent
Point the agent at this repo or the installed Syke skill, then let it drive the process.
If Syke is not set up yet, let the agent inspect the current plan with syke setup --json or syke setup, guide provider selection if needed, and finish setup first.
Once setup is done, a good agent workflow is:
- call `syke ask` for deeper timeline and evidence-backed queries
- call `syke context` when the current memex is enough
- call `syke record` to write observations back into memory
- call `syke status` for a quick operational snapshot
- call `syke doctor` only when setup or runtime looks wrong
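The routing above can be sketched as a small helper. The subcommand names come from the Syke docs; the function itself is a hypothetical illustration of how an agent might pick one:

```python
# Hypothetical routing helper: maps an agent's current need to the
# syke subcommand recommended above. The mapping mirrors the docs;
# the function and its "need" labels are illustrative assumptions.

def pick_syke_command(need: str) -> str:
    """Return the syke CLI invocation for a given agent need."""
    routes = {
        "evidence": "syke ask",      # deeper timeline, evidence-backed recall
        "context": "syke context",   # current memex is enough, speed matters
        "record": "syke record",     # write an observation back into memory
        "status": "syke status",     # quick operational snapshot
        "repair": "syke doctor",     # only when setup or runtime looks wrong
    }
    try:
        return routes[need]
    except KeyError:
        raise ValueError(f"unknown need: {need!r}")
```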
After sync and synthesis, Syke refreshes its local attachments and can install its skill file into detected skill-capable agent directories.
Why this loop is trustworthy
Syke separates capture from inference. Supported local harnesses feed raw activity into an append-only events timeline. When Syke synthesizes memory or answers a question, it does so inside a local workspace where the events snapshot is read-only, the learned-memory store is writable, and the current memex is routed back out as additive context.
That gives you something simple to trust: a record you can inspect, a memory layer that carries forward, and a loop that stays local-first until ask or synthesis calls your configured provider.
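A minimal sketch of that capture/inference split, assuming a hypothetical events schema (Syke's real schema is not documented here): capture appends to the ledger, and synthesis opens it through a read-only connection so it can never mutate the record.

```python
import sqlite3

def open_ledger(path: str) -> sqlite3.Connection:
    """Open the append-only events ledger (capture side). Schema is illustrative."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events ("
        "id INTEGER PRIMARY KEY, "   # monotonically increasing event id
        "ts TEXT NOT NULL, "         # observation timestamp
        "source TEXT NOT NULL, "     # harness that produced the event
        "payload TEXT NOT NULL)"     # raw observed activity
    )
    return conn

def append_event(conn: sqlite3.Connection, ts: str, source: str, payload: str) -> None:
    conn.execute(
        "INSERT INTO events (ts, source, payload) VALUES (?, ?, ?)",
        (ts, source, payload),
    )
    conn.commit()

def open_snapshot(path: str) -> sqlite3.Connection:
    """Open the ledger read-only (inference side): any write attempt fails."""
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```

The read-only URI connection is what makes the snapshot trustworthy: synthesis can query history but a bug in it cannot rewrite the evidence.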
How It Works
Claude Code Codex Hermes OpenCode
\ | | /
\ | | /
+-----------+-------------+-----------+
|
v
Observe: deterministic local ingest
no LLM before the ledger
|
v
+-------------------------------------------+
| events.db |
| immutable observed timeline |
| append-only evidence ledger |
+-------------------------------------------+
|
v
+-------------------------------------------+
| local Syke workspace |
| |
| read events.db snapshot |
| write syke.db learned memory |
| route MEMEX.md |
| |
| ask and synthesis run here |
+-------------------------------------------+
|
+--------------+---------------+
| |
v v
direct reads routed context
syke context syke ask
MEMEX.md
CLAUDE.md / AGENTS.md / SKILL.md
- `events.db` stores what happened.
- `syke.db` stores what Syke currently believes.
- `MEMEX.md` is the current map returned to future work.
- The raw timeline stays separate from learned memory.
CLI
syke ask "question" # deeper timeline and evidence-backed recall
syke context # current memex, instant local read
syke record "note" # write an observation back into memory
syke status # quick operational snapshot
syke doctor # deeper diagnostic
syke setup # start or repair the system
syke sync # manual refresh and synthesis cycle
Use syke ask when the agent needs more than the current memex. Use syke context when the current memex is enough and speed matters. Use syke record after useful work so the next session inherits it.
syke status is the quick snapshot. syke doctor is the deeper repair path.
syke daemon start
syke daemon stop
syke daemon status
syke daemon logs
Platforms
Syke discovers supported local harnesses from descriptor files and their expected local paths. During setup, it scans those paths, checks what is actually present on disk, and ingests what it finds.
Supported local harnesses today:
- Claude Code: sessions, tools, projects, branches
- Codex: sessions, prompts, tool and model metadata
- Hermes: distribution and harness events
- OpenCode: sessions and model metadata
Current active discovery roots in code include:
- `~/.claude/projects`
- `~/.claude/transcripts`
- `~/.codex`
- `~/.hermes`
- `~/.local/share/opencode`
All ingestion is local-first. Syke reads these surfaces from local files and local databases.
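The discovery pass can be sketched roughly like this. The root list mirrors the one above; the scan logic is an assumption for illustration, not Syke's implementation:

```python
from pathlib import Path

# Known harness roots, mirroring the discovery roots listed above.
DISCOVERY_ROOTS = {
    "claude-code": ["~/.claude/projects", "~/.claude/transcripts"],
    "codex": ["~/.codex"],
    "hermes": ["~/.hermes"],
    "opencode": ["~/.local/share/opencode"],
}

def discover_present(roots: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return only the harnesses whose roots actually exist on this machine."""
    found: dict[str, list[str]] = {}
    for harness, paths in roots.items():
        present = [p for p in paths if Path(p).expanduser().is_dir()]
        if present:
            found[harness] = present
    return found
```

Checking what is actually present on disk before ingesting keeps the pass cheap and avoids configuring adapters for harnesses the user does not run.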
When a supported harness exposes a native skill directory, Syke can also install its SKILL.md there as part of distribution.
For supported harnesses, setup can bootstrap or repair missing adapters before the first ingest pass. It does this through the Observe factory, which reads local samples, generates or repairs an adapter, validates it, and deploys it into the user adapter directory.
If your harness layout is unusual, or if you want to connect a new harness yourself, use:
syke connect /path/to/your/harness
The factory auto-detects JSONL versus SQLite, uses the appropriate adapter path, and writes the result into Syke's local adapters directory. In practice, that means your agent can usually connect a new harness by pointing Syke at the real local path and following the contract.
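The JSONL-versus-SQLite detection could look something like this sketch. The 16-byte SQLite magic header is a documented fact of the file format; the JSONL fallback heuristic is our assumption, not necessarily the factory's:

```python
import json
from pathlib import Path

SQLITE_MAGIC = b"SQLite format 3\x00"  # first 16 bytes of every SQLite database

def detect_harness_format(path: str) -> str:
    """Guess whether a harness store is SQLite or JSONL.

    Illustrative sniffing only; the real factory's heuristics may differ.
    """
    p = Path(path)
    with p.open("rb") as f:
        header = f.read(16)
    if header == SQLITE_MAGIC:
        return "sqlite"
    # Fallback: treat as JSONL if the first non-empty line parses as JSON.
    for line in p.read_text(encoding="utf-8", errors="replace").splitlines():
        if line.strip():
            json.loads(line)  # raises ValueError if not JSON
            return "jsonl"
    raise ValueError(f"unrecognized format: {path}")
```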
Privacy and ownership
Canonical user stores live under ~/.syke/data/{user}/. The workspace mirrors current state locally for synthesis and ask flows.
- `events.db` is the immutable observed ledger.
- `syke.db` is the learned-memory store.
- `MEMEX.md` is the current memex returned to future sessions.
- A content filter strips API keys, OAuth tokens, credential patterns, and private message bodies before ingest.
- Network calls go only to your configured LLM provider during ask and synthesis.
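The content filter mentioned above might resemble this sketch. The three patterns are illustrative assumptions, not Syke's actual rule set:

```python
import re

# Credential-shaped patterns to strip before ingest. Illustrative only;
# the filter shipped with Syke may match a different, broader set.
_SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{16,}"),                        # OpenAI-style API keys
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._~+/-]+=*"),            # OAuth bearer tokens
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),  # key=value credentials
]

def redact(text: str) -> str:
    """Replace credential-shaped substrings before they reach the ledger."""
    for pattern in _SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running redaction at ingest time, rather than at read time, means the secret never lands in `events.db` at all, which matters for an append-only store that is deliberately hard to rewrite.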
Users should have one place under their control for the scattered material their harnesses leave behind.
What changes when Syke is running
The simplest change is that your agents stop starting from blank.
A decision made in one harness can show up in the next place where it matters. A useful pattern does not have to stay trapped inside one session. A question like "what did I ship today?" can be answered from accumulated work instead of being rebuilt from scratch.
The bigger bet is that memory management itself should improve from use. Syke keeps the raw timeline separate from learned memory, then uses synthesis to keep reshaping the memex as a map. Over time, that lets the system learn better routes through a user's own history instead of forcing one fixed memory schema on everyone.
One controlled example: on February 26, 2026, the same question was asked against the same codebase in the same minute: "What did I ship today?" Manual multi-agent orchestration was compared with syke ask.
| Metric | Result |
|---|---|
| Token usage | 55% fewer tokens, from 970K to 431K |
| User-facing calls | 96% fewer calls, from 51 to 2 |
| Agents spawned | 3 to 0 |
This is one measured example from one workflow on one date. Freshness still has a gap of up to 15 minutes. The current claim is narrower and more useful: continuity can reduce reconstruction, and memory can get better at routing through repeated use.
Learn More
Start here
Runtime and internals
Story and evolution
Provider examples

syke auth set openai --api-key YOUR_KEY --model gpt-5-mini --use
syke auth set openrouter --api-key YOUR_KEY --use
syke auth use codex
syke auth set zai --api-key KEY --use
syke auth set kimi --api-key KEY --use
syke auth set azure --api-key KEY --endpoint URL --model MODEL --use
syke auth set ollama --model llama3.2 --use
syke auth set vllm --base-url URL --model MODEL --use
syke auth set llama-cpp --base-url URL --model MODEL --use
AGPL-3.0-only