
Antigravity MCP Troubleshooting Notes: When Local Auth State Won't Clear

Summary

Notes from troubleshooting Antigravity MCP OAuth. The root cause of an old Client ID that refused to go away was provider-level local auth state being reused. This post covers the typical symptoms, where the cached state actually lives, and a one-shot Python cleanup script.

Once Antigravity's MCP OAuth state goes bad, debugging it is genuinely painful.

The trap this time: restarting Antigravity did nothing. I thought the old auth state had been cleared, but it kept getting reused.

Typical symptoms

  • Restarting Antigravity doesn't help; reconnecting the MCP server still fails.

  • The browser authorization page keeps carrying an old client_id, or one the server doesn't recognize.

  • A confusing error is thrown:

Plain text
Invalid client. The clientId provided does not match to this client.
  • The logs show:

Plain text
Creating session for scopes:
Opening authorization URL for scopes:

Both log lines end after the colon: the authorization request is being initiated without any scopes at all.
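To confirm what the request actually carries, you can parse the authorization URL straight from the browser address bar. A minimal sketch; the URL and client ID below are made up for illustration:

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical authorization URL copied from the browser address bar.
url = "https://auth.example.com/authorize?client_id=old-client-123&scope=&response_type=code"

# keep_blank_values=True so an empty scope= shows up instead of being dropped.
params = parse_qs(urlparse(url).query, keep_blank_values=True)
print(params.get("client_id"))  # ['old-client-123']
print(params.get("scope"))      # [''] -> empty, matching the blank log lines
```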

A wrong turn: stop chasing the old client_id

At first I assumed some old client_id hadn't been deleted and kept searching for that ID. Wrong direction.

The core problem is that the provider's local auth state as a whole was never cleared.

As long as provider-level state survives, Antigravity reuses its stored dynamic registration on every reconnect, and the stale state keeps coming back.

Where the state actually lives

On Windows + WSL, stop looking for a ~/.antigravity.gemini directory. The state actually sits under the Windows user profile:

Plain text
%APPDATA%\Antigravity\User\globalStorage\state.vscdb
%APPDATA%\Antigravity\User\globalStorage\state.vscdb.backup
%APPDATA%\Antigravity\User\globalStorage\state.vscdb.bak

In WSL this usually maps to:

Plain text
/mnt/c/Users/<your-username>/AppData/Roaming/Antigravity/User/globalStorage/

The keys most worth watching in there:

  • dynamicAuthProviders

  • secret://...dynamicAuthProvider...

  • 任何包含 provider URL 的认证相关 key
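Before deleting anything, it helps to see what is actually stored. A minimal read-only sketch: it assumes a key/value table named ItemTable (the name used by VS Code-derived editors; the cleanup script below discovers tables dynamically instead of assuming the name), and builds a throwaway demo database rather than touching the real state.vscdb:

```python
import json
import sqlite3
import tempfile
from pathlib import Path

def list_matching_keys(db_path: Path, needle: str) -> list[str]:
    """Return ItemTable keys whose key or value mentions the given provider URL."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT key, value FROM ItemTable").fetchall()
    finally:
        conn.close()
    matched = []
    for key, value in rows:
        text = value.decode("utf-8", "ignore") if isinstance(value, bytes) else str(value)
        if needle in key or needle in text:
            matched.append(key)
    return matched

# Throwaway demo DB mimicking the globalStorage layout (illustration only);
# point db_path at the real state.vscdb to inspect actual state.
demo = Path(tempfile.mkdtemp()) / "state.vscdb"
conn = sqlite3.connect(demo)
conn.execute("CREATE TABLE ItemTable (key TEXT PRIMARY KEY, value BLOB)")
conn.execute(
    "INSERT INTO ItemTable VALUES (?, ?)",
    ("dynamicAuthProviders",
     json.dumps([{"providerId": "https://example.com/", "clientId": "abc123"}])),
)
conn.commit()
conn.close()

print(list_matching_keys(demo, "https://example.com/"))  # ['dynamicAuthProviders']
```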

Cleanup approach: clear the whole provider

If you only search for and delete the old client_id, Antigravity will still use its saved provider state to obtain a fresh dynamic client_id. What you observe is: the old ID is nowhere to be found, yet the authorization page still carries an ID you don't recognize, which makes it look like the server side was never cleaned up.

The right move is to clean up by provider.

Say the MCP server is:

Plain text
https://<your-domain>/

Then clear all local auth state tied to that provider, including:

  • the matching entry in dynamicAuthProviders

  • the related secret://... records

  • the dynamic clientId currently associated with this provider

Only then will the next reconnect go through a clean registration and authorization flow.

Automated cleanup script

Here is a Python script for one-shot cleanup:

  • dry run by default (read-only)

  • --apply performs the actual deletion

  • finds and clears associated state by provider URL

Fully quit Antigravity before running it.

Python
import argparse
import json
import os
import shutil
import sqlite3
from datetime import datetime
from pathlib import Path

DEFAULT_PROVIDERS = [
    "https://<your-domain>/",
]

ROOT = Path(os.environ["APPDATA"]) / "Antigravity" / "User" / "globalStorage"

def now_tag():
    return datetime.now().strftime("%Y%m%d-%H%M%S")

# SQLite databases begin with a fixed 16-byte magic header.
def is_sqlite(path: Path) -> bool:
    try:
        with path.open("rb") as f:
            return f.read(16) == b"SQLite format 3\x00"
    except Exception:
        return False

def backup_file(path: Path) -> Path:
    backup = path.with_name(f"{path.name}.pre-clear-{now_tag()}.bak")
    shutil.copy2(path, backup)
    return backup

def decode_value(value) -> str:
    if isinstance(value, bytes):
        return value.decode("utf-8", "ignore")
    return str(value)

def candidate_files():
    names = {
        "state.vscdb",
        "state.vscdb.backup",
        "state.vscdb.bak",
    }
    files = []
    for path in ROOT.iterdir():
        if path.is_file() and path.name in names:
            files.append(path)
    return sorted(files)

# Only scan tables that use the key/value layout found in globalStorage.
def has_key_value_columns(cur, table: str) -> bool:
    cols = [row[1] for row in cur.execute(f"PRAGMA table_info({table})").fetchall()]
    return {"key", "value"}.issubset(cols)

def load_json_row(cur, table: str, key: str):
    row = cur.execute(
        f"SELECT value FROM {table} WHERE key = ?",
        (key,),
    ).fetchone()
    if not row:
        return None

    try:
        return json.loads(decode_value(row[0]))
    except Exception:
        return None

# dynamicAuthProviders holds a JSON list of registered providers; match by
# providerId or authorizationServer and collect their dynamic clientIds.
def discover_provider_entries(cur, table: str, providers: list[str]):
    items = load_json_row(cur, table, "dynamicAuthProviders")
    if not isinstance(items, list):
        return [], [], []

    matched = []
    client_ids = set()

    for item in items:
        if not isinstance(item, dict):
            continue

        provider_id = item.get("providerId")
        auth_server = item.get("authorizationServer")

        if provider_id in providers or auth_server in providers:
            matched.append(item)
            client_id = item.get("clientId")
            if isinstance(client_id, str) and client_id:
                client_ids.add(client_id)

    return matched, sorted(client_ids), items

# Heuristic filter: only touch keys/values that look auth-related, so
# unrelated editor state is left alone.
def is_auth_like(key: str, value_text: str) -> bool:
    key_l = key.lower()
    value_l = value_text.lower()

    return (
        key.startswith("secret://")
        or "auth" in key_l
        or "oauth" in key_l
        or "clientregistration" in key_l
        or "dynamicauth" in key_l
        or '"authproviderid"' in value_l
        or '"clientid"' in value_l
        or '"isdynamicauthprovider"' in value_l
    )

# Build the list of pending edits (JSON updates and key deletions) without
# applying them, so a dry run can report exactly what would change.
def collect_actions(cur, table: str, providers: list[str]):
    matched_entries, client_ids, all_entries = discover_provider_entries(cur, table, providers)
    needles = set(providers) | set(client_ids)
    actions = []

    if matched_entries:
        kept_entries = [item for item in all_entries if item not in matched_entries]
        actions.append({
            "type": "update_json",
            "table": table,
            "key": "dynamicAuthProviders",
            "before_count": len(all_entries),
            "after_count": len(kept_entries),
            "after": kept_entries,
        })

    notifications = load_json_row(cur, table, "notifications.perSourceDoNotDisturbMode")
    if isinstance(notifications, list):
        kept = []
        removed = 0
        for item in notifications:
            item_id = item.get("id") if isinstance(item, dict) else None
            if item_id in providers:
                removed += 1
            else:
                kept.append(item)

        if removed:
            actions.append({
                "type": "update_json",
                "table": table,
                "key": "notifications.perSourceDoNotDisturbMode",
                "before_count": len(notifications),
                "after_count": len(kept),
                "after": kept,
            })

    rows = cur.execute(f"SELECT key, value FROM {table}").fetchall()
    for key, value in rows:
        if key in {"dynamicAuthProviders", "notifications.perSourceDoNotDisturbMode"}:
            continue

        value_text = decode_value(value)
        if not any(needle in key or needle in value_text for needle in needles):
            continue

        if is_auth_like(key, value_text):
            actions.append({
                "type": "delete_key",
                "table": table,
                "key": key,
            })

    deduped = {}
    for action in actions:
        deduped[(action["type"], action["table"], action["key"])] = action

    return list(deduped.values())

def apply_actions(cur, actions):
    for action in actions:
        if action["type"] == "delete_key":
            cur.execute(
                f"DELETE FROM {action['table']} WHERE key = ?",
                (action["key"],),
            )
        elif action["type"] == "update_json":
            cur.execute(
                f"UPDATE {action['table']} SET value = ? WHERE key = ?",
                (json.dumps(action["after"], separators=(",", ":")), action["key"]),
            )

def process_sqlite(path: Path, providers: list[str], apply_changes: bool):
    print(f"\n=== SQLITE: {path} ===")
    conn = sqlite3.connect(path)
    try:
        cur = conn.cursor()
        tables = [
            row[0]
            for row in cur.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
        ]

        all_actions = []
        found = False

        for table in tables:
            if not has_key_value_columns(cur, table):
                continue

            actions = collect_actions(cur, table, providers)
            if not actions:
                continue

            found = True
            print(f"TABLE: {table}")
            for action in actions:
                if action["type"] == "delete_key":
                    print(f"- DELETE {action['key']}")
                else:
                    print(
                        f"- UPDATE {action['key']} "
                        f"{action['before_count']} -> {action['after_count']}"
                    )

            all_actions.extend(actions)

        if not found:
            print("No matching auth state found.")
            return

        if not apply_changes:
            print("Mode: dry run, no changes applied.")
            return

        backup = backup_file(path)
        print(f"Backup created: {backup}")
        apply_actions(cur, all_actions)
        conn.commit()
        print("Applied.")

    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

# Non-SQLite companions (e.g. a plain-text backup) are renamed aside
# rather than edited in place.
def process_raw_file(path: Path, providers: list[str], apply_changes: bool):
    print(f"\n=== RAW FILE: {path} ===")
    data = path.read_bytes()

    found = False
    for provider in providers:
        if provider.encode() in data:
            print(f"Found provider bytes: {provider}")
            found = True

    if not found:
        print("No provider match found.")
        return

    if not apply_changes:
        print("Mode: dry run, raw file not modified.")
        return

    backup = backup_file(path)
    disabled = path.with_name(f"{path.name}.disabled-{now_tag()}")
    path.rename(disabled)
    print(f"Backup created: {backup}")
    print(f"Renamed raw file to: {disabled}")

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--apply", action="store_true", help="Actually modify files")
    parser.add_argument(
        "--provider",
        action="append",
        dest="providers",
        help="Target provider URL, can be repeated",
    )
    args = parser.parse_args()

    providers = args.providers or DEFAULT_PROVIDERS

    print(f"Mode: {'APPLY' if args.apply else 'DRY RUN'}")
    print(f"Root: {ROOT}")
    print("Target providers:")
    for provider in providers:
        print(f"- {provider}")

    files = candidate_files()
    if not files:
        print("\nNo candidate files found.")
        return

    for path in files:
        if is_sqlite(path):
            process_sqlite(path, providers, args.apply)
        else:
            process_raw_file(path, providers, args.apply)

    print("\nDone.")
    print("After --apply: relaunch Antigravity and reconnect the MCP server.")

if __name__ == "__main__":
    main()

First, see what would be deleted:

bash
python .\clear_antigravity_provider_auth.py

Once that looks right, delete for real:

bash
python .\clear_antigravity_provider_auth.py --apply

You can clear local dev environments in the same pass:

bash
python .\clear_antigravity_provider_auth.py --apply \
  --provider "https://<your-domain>/" \
  --provider "http://localhost:3000/" \
  --provider "http://127.0.0.1:3000/"

Summary

The biggest trap in Antigravity OAuth troubleshooting is that it tricks you into believing the local state is already gone.

Next time you hit the same weirdness (restarts that change nothing, an old ID that refuses to die, authorization requests missing their scopes), remember the core takeaway:

Don't chase the old client_id. Clear the local auth state by provider.
