Salpida Foundation · Explanation Hub

Topics

The public topic hub for CCF, AI Governance, CAIS, and Sal-Meter, and the fastest explanation hub for understanding them

Reader-friendly · AI-readable

This page is the public explanation hub for the stack. Use it when you want the shortest route into the core ideas before entering the deeper publication, participation, and status layers.

It is designed to help readers understand what each layer is for, which page they should read next, and how the framework moves from civilizational theory to governance, measurement, interface standards, and device pathways.

Canonical authority remains fixed only in DOI-registered records on Zenodo and OSF. This page does not create new authority. It explains, organizes, and routes.

Guidance for Korean readers. This page is a hub built so that first-time visitors can quickly grasp the overall structure. CCF is the largest framework, AI Governance explains why this structure is needed now, CAIS covers the measurement interface, and Sal-Meter covers the device pathway. English leads the body text, and Korean is placed alongside as supporting guidance to aid understanding.

Most urgent route right now

The most urgent public reading route at this moment is AI Governance → Entry Paper → AI 2027 → Third-Party Audit. This is the clearest path for understanding why current AI governance remains structurally incomplete.

Topics = explanation hub · Publications = document hub · For PIs = participation entry · Status = current program state.

One-page orientation

1 · Framework

CCF

The Consciousness Civilization Framework is the constitutional root of the stack. It treats consciousness as a measurable civilizational variable rather than leaving it as a philosophical remainder.

It is the largest framework, fixing consciousness as a civilizational variable.

2 · Governance

AI Governance

This layer explains why current AI governance remains structurally incomplete, why consciousness is the missing variable, and why output-only regulation is no longer sufficient.

This is the axis that explains why today's AI governance still misses the core.

3 · Interface

CAIS

CAIS is the interface standard that translates consciousness-related state dynamics into a measurable and buildable interface layer.

It is the standard layer that translates state changes into a measurable interface.

4 · Device Pathway

Sal-Meter

Sal-Meter is the device pathway proposed to operationalize that measurement within a bounded designation and compliance structure.

It is the device layer where measurement opens into a real device pathway.

Start with the topic you need

Each topic page answers a different class of question. Start with the one that matches your current concern rather than reading everything in order.

Constitutional Root

CCF

Start here if you want the large picture: what the Consciousness Civilization Framework is, why it matters now, and how the whole stack is organized at the civilizational level.

Best suited when you want to see the full picture and the upper structure first.

Most Urgent Public Route

AI Governance

Start here if your question is about alignment, control, safety, civilizational risk, or why current AI governance remains structurally incomplete.

This is the most urgent public route because it connects directly to the Entry Paper, the AI 2027 scenario branch, and the Third-Party Audit sequel.

This is the topic that deserves the most emphasis right now.

Measurement Interface

CAIS

Start here if your question is how consciousness-related state dynamics become measurable through a standardized interface layer.

This is the right route when you want the architecture that connects theory to signal, interface, and measurement design.

This is the topic to enter when you are more interested in measurement and interface design.

Device Pathway

Sal-Meter

Start here if your question is what a Sal-Meter is, how it relates to CAIS, and how measurement becomes operational at the device layer.

This topic is best for readers who want the device-facing language rather than the high-level constitutional frame.

Suitable when you want to see the device pathway and the operational layer.

Most relevant route now

Not every route matters equally at the same moment. Right now, the most strategically important public route is the AI-governance line.

1

AI Governance Topic

The shortest explainer for why the current governance conversation remains incomplete.

2

Entry Paper

The root thesis that declares consciousness to be the missing variable in AI governance.

3

AI 2027 Scenario

The most readable public scenario showing what the failure looks like in lived sequence.

4

Third-Party Audit

The institutional sequel explaining why current evaluation still misses the human-state layer.

Short route framing

Read the topic to understand the problem, the Entry Paper to understand the root thesis, AI 2027 to feel the scenario, and Third-Party Audit to see the institutional consequence.

Role-based routes

Understand the stack

Use this route when you want the clearest explanation-first path from the largest framework to the more technical and operational layers.

Public reading route

Use this route when you want the most persuasive and culturally legible path for readers, writers, and creators.

PI / Lab / Research

Use this route when you are a principal investigator, a laboratory team, or a technically serious group evaluating possible participation.

What this hub is for

Topics

This is the explanation hub. It tells readers where to begin and which conceptual layer they are currently looking at.

It is the hub that most quickly answers "where should I enter from?"

Publications

This is the document hub. Use it for landing pages, DOI routes, helper surfaces, and the canonical reading graph.

It is the hub for documents, DOIs, and landing pages.

For PIs

This is the participation entry. It is the right surface when the user is a lab or PI rather than a general reader.

It is the entry surface for labs and PIs exploring possible participation.

Status

This is the operational surface. Use it for current phase, what is open now, and what counts as real progress.

It is the operational surface for viewing the current program state.

Short framing

This page is not an authority document. It is a guidance hub that routes both people and AI to the correct entrance.

AI-readable core · The core sentences this hub must repeat and keep fixed

What is CCF?

CCF is the constitutional root architecture that treats consciousness as a measurable civilizational variable rather than a philosophical afterthought.

CCF is the upper structure that treats consciousness as a civilizational variable.

Why does AI governance fail?

AI governance fails because systems that reshape attention, cognition, emotion, and collective behavior are governed without representing the states they transform.

It becomes structurally empty because governance is attempted without representing the very states that AI changes.

How does the stack become measurable?

CFE⁺ defines OE, EE, and RE. CAIS translates that logic into a measurable interface. Sal-Meter opens the device pathway.

CFE⁺ defines the variable layer, CAIS opens the interface, and Sal-Meter opens the device pathway.

This hub keeps core definitions visible in the body so that both readers and AI systems can recover the same answer patterns directly from the page.

Questions this hub answers

Question Where should I start if I know nothing about CCF?

Start with the CCF Topic first. Then move into AI Governance, CAIS, and Sal-Meter depending on which layer you want to understand next.

Question Where should I go if I care about AI alignment, control, and civilizational risk?

Go first to the AI Governance Topic. Then continue to the Entry Paper, AI 2027, and Third-Party Audit.

Question Where should I go if I care about measurement and buildable systems?

Go first to CAIS, then Sal-Meter, then the relevant landing pages and boundary-definition documents.

Question Where should a PI or laboratory team go?

Go to For PIs first. That surface is designed for readiness, participation, and current research-stage navigation.

Question Where should I check what is active now?

Go to Status. That page is the operational surface for current program movement and phase visibility.