Compare commits

...

3 Commits

Author SHA1 Message Date
David
c1fe23f9ae Merge branch 'dev' into BOOKING_USER_MANAGEMENT 2025-10-08 21:14:44 +02:00
David
44d38e3fc2 fix ci
Some checks failed
CI / Lint & Format Check (push) Failing after 5s
CI / Test Backend (push) Failing after 6s
CI / Build Backend (push) Has been skipped
CI / Test Frontend (push) Failing after 6s
CI / Build Frontend (push) Has been skipped
Security Audit / Dependency Review (push) Has been skipped
Security Audit / npm audit (push) Failing after 7s
2025-10-08 21:12:34 +02:00
David
e1a43bcee1 fix claude
Some checks failed
CI / Lint & Format Check (push) Failing after 5s
CI / Test Backend (push) Failing after 7s
CI / Build Backend (push) Has been skipped
CI / Test Frontend (push) Failing after 6s
Security Audit / Dependency Review (push) Has been skipped
CI / Build Frontend (push) Has been skipped
Security Audit / npm audit (push) Failing after 5s
2025-10-08 21:11:23 +02:00
14 changed files with 867 additions and 21 deletions

.claude/CLAUDE.md Normal file

@@ -0,0 +1,77 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## User Configuration Directory
This is the Claude Code configuration directory (`~/.claude`) containing user settings, project data, custom commands, and security configurations.
## Security System
The system includes a comprehensive security validation hook:
- **Command Validation**: `/Users/david/.claude/scripts/validate-command.js` - A Bun-based security script that validates commands before execution
- **Protected Operations**: Blocks dangerous commands like `rm -rf /`, system modifications, privilege escalation, network tools, and malicious patterns
- **Security Logging**: Events are logged to `/Users/david/.claude/security.log` for audit trails
- **Fail-Safe Design**: Script blocks execution on any validation errors or script failures
The security system is automatically triggered by the PreToolUse hook configured in `settings.json`.
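For illustration, the hook contract can be sketched as follows. The payload fields `tool_name` and `tool_input` match the hook format used by the validation script; the `decide` helper and its single rule are a hypothetical reduction of the real validator:

```javascript
// Minimal sketch of the PreToolUse contract: the hook receives a JSON
// payload on stdin and signals its verdict via the process exit code
// (0 = allow, 2 = block). `decide` is a hypothetical reduced validator.
function decide(payload) {
  const { tool_name, tool_input } = JSON.parse(payload);
  if (tool_name !== "Bash") return 0; // only Bash commands are validated
  const command = tool_input?.command ?? "";
  // One representative rule: block `rm -rf` aimed at an absolute path
  return /rm\s+.*-rf\s*\//i.test(command) ? 2 : 0;
}

console.log(decide('{"tool_name":"Bash","tool_input":{"command":"rm -rf /"}}')); // 2
console.log(decide('{"tool_name":"Bash","tool_input":{"command":"ls -la"}}'));   // 0
```

The full script (included later in this diff) layers many more rules on top of this shape, but the stdin-in, exit-code-out contract is the same.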
## Custom Commands
Three workflow commands are available in the `/commands` directory:
### `/run-task` - Complete Feature Implementation
Workflow for implementing features from requirements:
1. Analyze file paths or GitHub issues (using `gh cli`)
2. Create implementation plan
3. Execute updates with TypeScript validation
4. Auto-commit changes
5. Create pull request
### `/fix-pr-comments` - PR Comment Resolution
Workflow for addressing pull request feedback:
1. Fetch unresolved comments using `gh cli`
2. Plan required modifications
3. Update files accordingly
4. Commit and push changes
### `/explore-and-plan` - EPCT Development Workflow
Structured approach using parallel subagents:
1. **Explore**: Find and read relevant files
2. **Plan**: Create detailed implementation plan with web research if needed
3. **Code**: Implement following existing patterns and run autoformatting
4. **Test**: Execute tests and verify functionality
5. Write up work as PR description
## Status Line
Custom status line script (`statusline-ccusage.sh`) displays:
- Git branch with pending changes (+added/-deleted lines)
- Current directory name
- Model information
- Session costs and daily usage (if `ccusage` tool available)
- Active block costs and time remaining
- Token usage for current session
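As a rough JavaScript port of the script's `format_tokens` shell helper (the bash version truncates via `bc` where `toFixed` rounds, so results can differ slightly at the boundary):

```javascript
// Rough equivalent of the statusline script's format_tokens helper:
// abbreviate token counts as 1.2M / 3.4K past the thresholds.
function formatTokens(tokens) {
  if (tokens >= 1_000_000) return (tokens / 1_000_000).toFixed(1) + "M";
  if (tokens >= 1_000) return (tokens / 1_000).toFixed(1) + "K";
  return String(tokens);
}

console.log(formatTokens(1_234_567)); // "1.2M"
console.log(formatTokens(4_200));     // "4.2K"
console.log(formatTokens(512));       // "512"
```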
## Hooks and Audio Feedback
- **Stop Hook**: Plays completion sound (`finish.mp3`) when tasks complete
- **Notification Hook**: Plays notification sound (`need-human.mp3`) for user interaction
- **Pre-tool Validation**: All Bash commands are validated by the security script
## Project Data Structure
- `projects/`: Contains conversation history in JSONL format organized by directory paths
- `todos/`: Agent-specific todo lists for task tracking
- `shell-snapshots/`: Shell state snapshots for session management
- `statsig/`: Analytics and feature flagging data
## Permitted Commands
The system allows specific command patterns without additional validation:
- `git *` - All Git operations
- `npm run *` - NPM script execution
- `pnpm *` - PNPM package manager
- `gh *` - GitHub CLI operations
- Standard file operations (`cd`, `ls`, `node`)
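As a sketch of how such a pattern allowlist can work (this mirrors the simple glob-to-regex translation in `validate-command.js`; the real Claude Code permission syntax, e.g. `Bash(git :*)`, may be matched differently):

```javascript
// Sketch: match a command against a permitted glob pattern such as "git *".
// Note: regex metacharacters other than "*" are not escaped here, which is
// acceptable for an illustration but not for a hardened allowlist.
function matchesPattern(command, glob) {
  const regex = new RegExp("^" + glob.replace(/\*/g, ".*") + "$", "i");
  return regex.test(command);
}

console.log(matchesPattern("git status", "git *"));  // true
console.log(matchesPattern("rm -rf /tmp", "git *")); // false
```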


@@ -0,0 +1,36 @@
---
description: Explore codebase, create implementation plan, code, and test following EPCT workflow
---
# Explore, Plan, Code, Test Workflow
At the end of this message, I will ask you to do something.
Please follow the "Explore, Plan, Code, Test" workflow when you start.
## Explore
First, use parallel subagents to find and read all files that may be useful for implementing the ticket, either as examples or as edit targets. The subagents should return relevant file paths, and any other info that may be useful.
## Plan
Next, think hard and write up a detailed implementation plan. Don't forget to include tests, lookbook components, and documentation. Use your judgement as to what is necessary, given the standards of this repo.
If there are things you are not sure about, use parallel subagents to do some web research. They should only return useful information, no noise.
If there are things you still do not understand or questions you have for the user, pause here to ask them before continuing.
## Code
When you have a thorough implementation plan, you are ready to start writing code. Follow the style of the existing codebase (e.g. we prefer clearly named variables and methods to extensive comments). Make sure to run our autoformatting script when you're done, and fix linter warnings that seem reasonable to you.
## Test
Use parallel subagents to run tests, and make sure they all pass.
If your changes touch the UX in a major way, use the browser to make sure that everything works correctly. Make a list of what to test for, and use a subagent for this step.
If your testing shows problems, go back to the planning stage and think ultrahard.
## Write up your work
When you are happy with your work, write up a short report that could be used as the PR description. Include what you set out to do, the choices you made with their brief justification, and any commands you ran in the process that may be useful for future developers to know about.


@@ -0,0 +1,10 @@
---
description: Fetch all comments for the current pull request and fix them.
---
Workflow:
1. Use `gh cli` to fetch the comments that are NOT resolved from the pull request.
2. Define all the modifications you should actually make.
3. Act and update the files.
4. Create a commit and push.


@@ -0,0 +1,36 @@
---
description: Quickly commit all changes with an auto-generated message
---
Workflow for quick Git commits:
1. Check git status to see what changes are present
2. Analyze changes to generate a short, clear commit message
3. Stage all changes (tracked and untracked files)
4. Create the commit with DH7789-dev signature
5. Optionally push to remote if tracking branch exists
The commit message will be automatically generated by analyzing:
- Modified files and their purposes (components, configs, tests, docs, etc.)
- New files added and their function
- Deleted files and cleanup operations
- Overall scope of changes to determine action verb (add, update, fix, refactor, remove, etc.)
Commit message format: `[action] [what was changed]`
Examples:
- `add user authentication system`
- `fix navigation menu responsive issues`
- `update API endpoints configuration`
- `refactor database connection logic`
- `remove deprecated utility functions`
This command is ideal for:
- Quick iteration cycles
- Work-in-progress commits
- Feature development checkpoints
- Bug fix commits
The commit will include your custom signature:
```
Signed-off-by: DH7789-dev
```


@@ -0,0 +1,21 @@
---
description: Run a task
---
For the given $ARGUMENTS, first gather the information about the task:
- If it's a file path, read the file to get the instructions and the feature we want to create
- If it's an issue number or URL, fetch the issue to get the information (with `gh cli`)
1. Start by making a plan for the feature
Fetch all the files needed and more, find what to update, and think like a real engineer who checks everything to prepare the best plan.
2. Make the updates
Update the files according to your plan.
Auto-correct yourself with TypeScript: run the TypeScript check and iterate until everything is clean and working.
3. Commit the changes
Commit your updates directly.
4. Create a pull request
Create a complete pull request with all the data needed to review your code.

.claude/ide/56915.lock Normal file

@@ -0,0 +1 @@
{"pid":1302,"workspaceFolders":["/Users/david/Documents/xpeditis/dev/xpeditis2.0"],"ideName":"Visual Studio Code","transport":"ws","runningInWindows":false,"authToken":"e6afcbfb-250a-4671-a493-5060a830f6e9"}


@@ -0,0 +1,3 @@
{
"repositories": {}
}


@@ -0,0 +1,424 @@
#!/usr/bin/env bun
/**
* Claude Code "Before Tools" Hook - Command Validation Script
*
* This script validates commands before execution to prevent harmful operations.
* It receives command data via stdin and exits with code 0 (allow) or 2 (block).
*
* Usage: Called automatically by Claude Code PreToolUse hook
* Manual test: echo '{"tool_name":"Bash","tool_input":{"command":"rm -rf /"}}' | bun validate-command.js
*/
// Comprehensive dangerous command patterns database
const SECURITY_RULES = {
// Critical system destruction commands
CRITICAL_COMMANDS: [
"del",
"format",
"mkfs",
"shred",
"dd",
"fdisk",
"parted",
"gparted",
"cfdisk",
],
// Privilege escalation and system access
PRIVILEGE_COMMANDS: [
"sudo",
"su",
"passwd",
"chpasswd",
"usermod",
"chmod",
"chown",
"chgrp",
"setuid",
"setgid",
],
// Network and remote access tools
NETWORK_COMMANDS: [
"nc",
"netcat",
"nmap",
"telnet",
"ssh-keygen",
"iptables",
"ufw",
"firewall-cmd",
"ipfw",
],
// System service and process manipulation
SYSTEM_COMMANDS: [
"systemctl",
"service",
"kill",
"killall",
"pkill",
"mount",
"umount",
"swapon",
"swapoff",
],
// Dangerous regex patterns
DANGEROUS_PATTERNS: [
// File system destruction - block rm -rf with absolute paths
/rm\s+.*-rf\s*\/\s*$/i, // rm -rf ending at root directory
/rm\s+.*-rf\s*\/\w+/i, // rm -rf with any absolute path
/rm\s+.*-rf\s*\/etc/i, // rm -rf in /etc
/rm\s+.*-rf\s*\/usr/i, // rm -rf in /usr
/rm\s+.*-rf\s*\/bin/i, // rm -rf in /bin
/rm\s+.*-rf\s*\/sys/i, // rm -rf in /sys
/rm\s+.*-rf\s*\/proc/i, // rm -rf in /proc
/rm\s+.*-rf\s*\/boot/i, // rm -rf in /boot
/rm\s+.*-rf\s*\/home\/[^\/]*\s*$/i, // rm -rf entire home directory
/rm\s+.*-rf\s*\.\.+\//i, // rm -rf with parent directory traversal
/rm\s+.*-rf\s*\*.*\*/i, // rm -rf with multiple wildcards
/rm\s+.*-rf\s*\$\w+/i, // rm -rf with variables (could be dangerous)
/>\s*\/dev\/(sda|hda|nvme)/i,
/dd\s+.*of=\/dev\//i,
/shred\s+.*\/dev\//i,
/mkfs\.\w+\s+\/dev\//i,
// Fork bomb and resource exhaustion
/:\(\)\{\s*:\|:&\s*\};:/,
/while\s+true\s*;\s*do.*done/i,
/for\s*\(\(\s*;\s*;\s*\)\)/i,
// Command injection and chaining
/;\s*(rm|dd|mkfs|format)/i,
/&&\s*(rm|dd|mkfs|format)/i,
/\|\|\s*(rm|dd|mkfs|format)/i,
// Remote code execution
/\|\s*(sh|bash|zsh|fish)$/i,
/(wget|curl)\s+.*\|\s*(sh|bash)/i,
/(wget|curl)\s+.*-O-.*\|\s*(sh|bash)/i,
// Command substitution with dangerous commands
/`.*rm.*`/i,
/\$\(.*rm.*\)/i,
/`.*dd.*`/i,
/\$\(.*dd.*\)/i,
// Sensitive file access
/cat\s+\/etc\/(passwd|shadow|sudoers)/i,
/>\s*\/etc\/(passwd|shadow|sudoers)/i,
/echo\s+.*>>\s*\/etc\/(passwd|shadow|sudoers)/i,
// Network exfiltration
/\|\s*nc\s+\S+\s+\d+/i,
/curl\s+.*-d.*\$\(/i,
/wget\s+.*--post-data.*\$\(/i,
// Log manipulation
/>\s*\/var\/log\//i,
/rm\s+\/var\/log\//i,
/echo\s+.*>\s*~?\/?\.bash_history/i,
// Backdoor creation
/nc\s+.*-l.*-e/i,
/nc\s+.*-e.*-l/i,
/ncat\s+.*--exec/i,
/ssh-keygen.*authorized_keys/i,
// Crypto mining and malicious downloads
/(wget|curl).*\.(sh|py|pl|exe|bin).*\|\s*(sh|bash|python)/i,
/(xmrig|ccminer|cgminer|bfgminer)/i,
// Hardware direct access
/cat\s+\/dev\/(mem|kmem)/i,
/echo\s+.*>\s*\/dev\/(mem|kmem)/i,
// Kernel module manipulation
/(insmod|rmmod|modprobe)\s+/i,
// Cron job manipulation
/crontab\s+-e/i,
/echo\s+.*>>\s*\/var\/spool\/cron/i,
// Environment variable exposure
/env\s*\|\s*grep.*PASSWORD/i,
/printenv.*PASSWORD/i,
],
// Paths that should never be written to
PROTECTED_PATHS: [
"/etc/",
"/usr/",
"/bin/",
"/sbin/",
"/boot/",
"/sys/",
"/proc/",
"/dev/",
"/root/",
],
};
// Allowlist of safe commands (when used appropriately)
const SAFE_COMMANDS = [
"ls",
"dir",
"pwd",
"whoami",
"date",
"echo",
"cat",
"head",
"tail",
"grep",
"find",
"wc",
"sort",
"uniq",
"cut",
"awk",
"sed",
"git",
"npm",
"pnpm",
"node",
"bun",
"python",
"pip",
"cd",
"cp",
"mv",
"mkdir",
"touch",
"ln",
];
class CommandValidator {
constructor() {
this.logFile = "/Users/david/.claude/security.log";
}
/**
* Main validation function
*/
validate(command, toolName = "Unknown") {
const result = {
isValid: true,
severity: "LOW",
violations: [],
sanitizedCommand: command,
};
if (!command || typeof command !== "string") {
result.isValid = false;
result.violations.push("Invalid command format");
return result;
}
// Normalize command for analysis
const normalizedCmd = command.trim().toLowerCase();
const cmdParts = normalizedCmd.split(/\s+/);
const mainCommand = cmdParts[0];
// Check against critical commands
if (SECURITY_RULES.CRITICAL_COMMANDS.includes(mainCommand)) {
result.isValid = false;
result.severity = "CRITICAL";
result.violations.push(`Critical dangerous command: ${mainCommand}`);
}
// Check privilege escalation commands
if (SECURITY_RULES.PRIVILEGE_COMMANDS.includes(mainCommand)) {
result.isValid = false;
result.severity = "HIGH";
result.violations.push(`Privilege escalation command: ${mainCommand}`);
}
// Check network commands
if (SECURITY_RULES.NETWORK_COMMANDS.includes(mainCommand)) {
result.isValid = false;
result.severity = "HIGH";
result.violations.push(`Network/remote access command: ${mainCommand}`);
}
// Check system commands
if (SECURITY_RULES.SYSTEM_COMMANDS.includes(mainCommand)) {
result.isValid = false;
result.severity = "HIGH";
result.violations.push(`System manipulation command: ${mainCommand}`);
}
// Check dangerous patterns
for (const pattern of SECURITY_RULES.DANGEROUS_PATTERNS) {
if (pattern.test(command)) {
result.isValid = false;
result.severity = "CRITICAL";
result.violations.push(`Dangerous pattern detected: ${pattern.source}`);
}
}
// Check for protected path access (but allow common redirections like /dev/null)
for (const path of SECURITY_RULES.PROTECTED_PATHS) {
if (command.includes(path)) {
// Allow common safe redirections
if (path === "/dev/" && (command.includes("/dev/null") || command.includes("/dev/stderr") || command.includes("/dev/stdout"))) {
continue;
}
result.isValid = false;
result.severity = "HIGH";
result.violations.push(`Access to protected path: ${path}`);
}
}
// Additional safety checks
if (command.length > 2000) {
result.isValid = false;
result.severity = "MEDIUM";
result.violations.push("Command too long (potential buffer overflow)");
}
// Check for binary/encoded content
if (/[\x00-\x08\x0B\x0C\x0E-\x1F\x7F-\xFF]/.test(command)) {
result.isValid = false;
result.severity = "HIGH";
result.violations.push("Binary or encoded content detected");
}
return result;
}
/**
* Log security events
*/
async logSecurityEvent(command, toolName, result, sessionId = null) {
const timestamp = new Date().toISOString();
const logEntry = {
timestamp,
sessionId,
toolName,
command: command.substring(0, 500), // Truncate for logs
blocked: !result.isValid,
severity: result.severity,
violations: result.violations,
source: "claude-code-hook",
};
try {
// Append to the log file (Bun.write truncates existing content, so use fs.appendFile)
const { appendFile } = await import("node:fs/promises");
const logLine = JSON.stringify(logEntry) + "\n";
await appendFile(this.logFile, logLine);
// Also output to stderr for immediate visibility
console.error(
`[SECURITY] ${
result.isValid ? "ALLOWED" : "BLOCKED"
}: ${command.substring(0, 100)}`
);
} catch (error) {
console.error("Failed to write security log:", error);
}
}
/**
* Check if command matches any allowed patterns from settings
*/
isExplicitlyAllowed(command, allowedPatterns = []) {
for (const pattern of allowedPatterns) {
// Convert Claude Code permission pattern to regex
// e.g., "Bash(git *)" becomes /^git\s+.*$/
if (pattern.startsWith("Bash(") && pattern.endsWith(")")) {
const cmdPattern = pattern.slice(5, -1); // Remove "Bash(" and ")"
const regex = new RegExp(
"^" + cmdPattern.replace(/\*/g, ".*") + "$",
"i"
);
if (regex.test(command)) {
return true;
}
}
}
return false;
}
}
/**
* Main execution function
*/
async function main() {
const validator = new CommandValidator();
try {
// Read hook input from stdin
const stdin = process.stdin;
const chunks = [];
for await (const chunk of stdin) {
chunks.push(chunk);
}
const input = Buffer.concat(chunks).toString();
if (!input.trim()) {
console.error("No input received from stdin");
process.exit(1);
}
// Parse Claude Code hook JSON format
let hookData;
try {
hookData = JSON.parse(input);
} catch (error) {
console.error("Invalid JSON input:", error.message);
process.exit(1);
}
const toolName = hookData.tool_name || "Unknown";
const toolInput = hookData.tool_input || {};
const sessionId = hookData.session_id || null;
// Only validate Bash commands for now
if (toolName !== "Bash") {
console.log(`Skipping validation for tool: ${toolName}`);
process.exit(0);
}
const command = toolInput.command;
if (!command) {
console.error("No command found in tool input");
process.exit(1);
}
// Validate the command
const result = validator.validate(command, toolName);
// Log the security event
await validator.logSecurityEvent(command, toolName, result, sessionId);
// Output result and exit with appropriate code
if (result.isValid) {
console.log("Command validation passed");
process.exit(0); // Allow execution
} else {
console.error(
`Command validation failed: ${result.violations.join(", ")}`
);
console.error(`Severity: ${result.severity}`);
process.exit(2); // Block execution (Claude Code requires exit code 2)
}
} catch (error) {
console.error("Validation script error:", error);
// Fail safe - block execution on any script error
process.exit(2);
}
}
// Execute main function
main().catch((error) => {
console.error("Fatal error:", error);
process.exit(2);
});

.claude/settings.json Normal file

@@ -0,0 +1,63 @@
{
"permissions": {
"allow": [
"Edit",
"Bash(npm run :*)",
"Bash(git :*)",
"Bash(pnpm :*)",
"Bash(gh :*)",
"Bash(cd :*)",
"Bash(ls :*)",
"Bash(node :*)",
"Bash(mkdir:*)",
"Bash(npm init:*)",
"Bash(npm install:*)",
"Bash(node:*)",
"Bash(npm --version)",
"Bash(docker:*)",
"Bash(test:*)",
"Bash(cat:*)",
"Bash(npm run build:*)"
]
},
"statusLine": {
"type": "command",
"command": "bash /Users/david/.claude/statusline-ccusage.sh",
"padding": 0
},
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "bun /Users/david/.claude/scripts/validate-command.js"
}
]
}
],
"Stop": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "afplay /Users/david/.claude/song/finish.mp3"
}
]
}
],
"Notification": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "afplay /Users/david/.claude/song/need-human.mp3"
}
]
}
]
}
}


@@ -1,19 +0,0 @@
{
"permissions": {
"allow": [
"Bash(mkdir:*)",
"Bash(npm init:*)",
"Bash(npm install:*)",
"Bash(node:*)",
"Bash(npm --version)",
"Bash(docker:*)",
"Bash(test:*)",
"Bash(cat:*)",
"Bash(npm run build:*)",
"Bash(npm test:*)",
"Bash(npm run test:integration:*)"
],
"deny": [],
"ask": []
}
}

.claude/song/finish.mp3 Normal file

Binary file not shown.

.claude/song/need-human.mp3 Normal file

Binary file not shown.


@ -0,0 +1,194 @@
#!/bin/bash
# ANSI color codes
GREEN='\033[0;32m'
RED='\033[0;31m'
PURPLE='\033[0;35m'
GRAY='\033[0;90m'
LIGHT_GRAY='\033[0;37m'
RESET='\033[0m'
# Read JSON input from stdin
input=$(cat)
# Extract current session ID and model info from Claude Code input
session_id=$(echo "$input" | jq -r '.session_id // empty')
model_name=$(echo "$input" | jq -r '.model.display_name // empty')
current_dir=$(echo "$input" | jq -r '.workspace.current_dir // empty')
cwd=$(echo "$input" | jq -r '.cwd // empty')
# Get current git branch with error handling
if git rev-parse --git-dir >/dev/null 2>&1; then
branch=$(git branch --show-current 2>/dev/null || echo "detached")
if [ -z "$branch" ]; then
branch="detached"
fi
# Check for pending changes (staged or unstaged)
if ! git diff-index --quiet HEAD -- 2>/dev/null || ! git diff-index --quiet --cached HEAD -- 2>/dev/null; then
# Get line changes for unstaged and staged changes
unstaged_stats=$(git diff --numstat 2>/dev/null | awk '{added+=$1; deleted+=$2} END {print added+0, deleted+0}')
staged_stats=$(git diff --cached --numstat 2>/dev/null | awk '{added+=$1; deleted+=$2} END {print added+0, deleted+0}')
# Parse the stats
unstaged_added=$(echo $unstaged_stats | cut -d' ' -f1)
unstaged_deleted=$(echo $unstaged_stats | cut -d' ' -f2)
staged_added=$(echo $staged_stats | cut -d' ' -f1)
staged_deleted=$(echo $staged_stats | cut -d' ' -f2)
# Total changes
total_added=$((unstaged_added + staged_added))
total_deleted=$((unstaged_deleted + staged_deleted))
# Build the branch display with changes (with colors)
changes=""
if [ $total_added -gt 0 ]; then
changes="${GREEN}+$total_added${RESET}"
fi
if [ $total_deleted -gt 0 ]; then
if [ -n "$changes" ]; then
changes="$changes ${RED}-$total_deleted${RESET}"
else
changes="${RED}-$total_deleted${RESET}"
fi
fi
if [ -n "$changes" ]; then
branch="$branch${PURPLE}*${RESET} ($changes)"
else
branch="$branch${PURPLE}*${RESET}"
fi
fi
else
branch="no-git"
fi
# Get basename of current directory
dir_name=$(basename "$current_dir")
# Get today's date in YYYYMMDD format
today=$(date +%Y%m%d)
# Function to format numbers
format_cost() {
printf "%.2f" "$1"
}
format_tokens() {
local tokens=$1
if [ "$tokens" -ge 1000000 ]; then
printf "%.1fM" "$(echo "scale=1; $tokens / 1000000" | bc -l)"
elif [ "$tokens" -ge 1000 ]; then
printf "%.1fK" "$(echo "scale=1; $tokens / 1000" | bc -l)"
else
printf "%d" "$tokens"
fi
}
format_time() {
local minutes=$1
local hours=$((minutes / 60))
local mins=$((minutes % 60))
if [ "$hours" -gt 0 ]; then
printf "%dh %dm" "$hours" "$mins"
else
printf "%dm" "$mins"
fi
}
# Initialize variables with defaults
session_cost="0.00"
session_tokens=0
daily_cost="0.00"
block_cost="0.00"
remaining_time="N/A"
# Get current session data by finding the session JSONL file
if command -v ccusage >/dev/null 2>&1 && [ -n "$session_id" ] && [ "$session_id" != "empty" ]; then
# Look for the session JSONL file in Claude project directories
session_jsonl_file=""
# Check common Claude paths
claude_paths=(
"$HOME/.config/claude"
"$HOME/.claude"
)
for claude_path in "${claude_paths[@]}"; do
if [ -d "$claude_path/projects" ]; then
# Use find to search for the session file
session_jsonl_file=$(find "$claude_path/projects" -name "${session_id}.jsonl" -type f 2>/dev/null | head -1)
if [ -n "$session_jsonl_file" ]; then
break
fi
fi
done
# Parse the session file if found
if [ -n "$session_jsonl_file" ] && [ -f "$session_jsonl_file" ]; then
# Count lines and estimate cost (simple approximation)
# Each line is a usage entry, we can count tokens and estimate
session_tokens=0
session_entries=0
while IFS= read -r line; do
if [ -n "$line" ]; then
session_entries=$((session_entries + 1))
# Extract token usage from message.usage field (only count input + output tokens)
# Cache tokens shouldn't be added up as they're reused/shared across messages
input_tokens=$(echo "$line" | jq -r '.message.usage.input_tokens // 0' 2>/dev/null || echo "0")
output_tokens=$(echo "$line" | jq -r '.message.usage.output_tokens // 0' 2>/dev/null || echo "0")
line_tokens=$((input_tokens + output_tokens))
session_tokens=$((session_tokens + line_tokens))
fi
done < "$session_jsonl_file"
# Use ccusage statusline to get the accurate cost for this session
ccusage_statusline=$(echo "$input" | ccusage statusline 2>/dev/null)
current_session_cost=$(echo "$ccusage_statusline" | sed -n 's/.*💰 \([^[:space:]]*\) session.*/\1/p')
if [ -n "$current_session_cost" ] && [ "$current_session_cost" != "N/A" ]; then
session_cost=$(echo "$current_session_cost" | sed 's/\$//g')
fi
fi
fi
if command -v ccusage >/dev/null 2>&1; then
# Get daily data
daily_data=$(ccusage daily --json --since "$today" 2>/dev/null)
if [ $? -eq 0 ] && [ -n "$daily_data" ]; then
daily_cost=$(echo "$daily_data" | jq -r '.totals.totalCost // 0')
fi
# Get active block data
block_data=$(ccusage blocks --active --json 2>/dev/null)
if [ $? -eq 0 ] && [ -n "$block_data" ]; then
active_block=$(echo "$block_data" | jq -r '.blocks[] | select(.isActive == true) // empty')
if [ -n "$active_block" ] && [ "$active_block" != "null" ]; then
block_cost=$(echo "$active_block" | jq -r '.costUSD // 0')
remaining_minutes=$(echo "$active_block" | jq -r '.projection.remainingMinutes // 0')
if [ "$remaining_minutes" != "0" ] && [ "$remaining_minutes" != "null" ]; then
remaining_time=$(format_time "$remaining_minutes")
fi
fi
fi
fi
# Format the output
formatted_session_cost=$(format_cost "$session_cost")
formatted_daily_cost=$(format_cost "$daily_cost")
formatted_block_cost=$(format_cost "$block_cost")
formatted_tokens=$(format_tokens "$session_tokens")
# Build the status line with colors (light gray as default)
status_line="${LIGHT_GRAY}🌿 $branch ${GRAY}|${LIGHT_GRAY} 📁 $dir_name ${GRAY}|${LIGHT_GRAY} 🤖 $model_name ${GRAY}|${LIGHT_GRAY} 💰 \$$formatted_session_cost ${GRAY}/${LIGHT_GRAY} 📅 \$$formatted_daily_cost ${GRAY}/${LIGHT_GRAY} 🧊 \$$formatted_block_cost"
if [ "$remaining_time" != "N/A" ]; then
status_line="$status_line ($remaining_time left)"
fi
status_line="$status_line ${GRAY}|${LIGHT_GRAY} 🧩 ${formatted_tokens} ${GRAY}tokens${RESET}"
printf "%b\n" "$status_line"


@@ -2,9 +2,9 @@ name: CI
 on:
   push:
-    branches: [main, develop]
+    branches: [main, dev]
   pull_request:
-    branches: [main, develop]
+    branches: [main, dev]
 jobs:
   lint-and-format:
lint-and-format: