8bit-threshold-computer

A Turing-complete 8-bit CPU implemented entirely as threshold logic gates.

Every logic gate is a threshold neuron: output = 1 if (Σ wᵢxᵢ + b) ≥ 0 else 0

Tensors:    3,122
Parameters: 5,648

What Is This?

A complete 8-bit processor where every operation—from Boolean logic to arithmetic to control flow—is implemented using only weighted sums and step functions. No traditional gates.

| Component | Specification |
|-----------|---------------|
| Registers | 4 × 8-bit general purpose |
| Memory    | 256 bytes addressable |
| ALU       | 16 operations (ADD, SUB, AND, OR, XOR, NOT, SHL, SHR, INC, DEC, CMP, NEG, PASS, ZERO, ONES, NOP) |
| Flags     | Zero, Negative, Carry, Overflow |
| Control   | JMP, JZ, JNZ, JC, JNC, JN, JP, JV, JNV, CALL, RET, PUSH, POP |

Turing complete. Verified with loops, conditionals, recursion, and self-modification.


Background

Threshold Logic

A threshold gate computes a Boolean function by taking a weighted sum of binary inputs and comparing it to a threshold. If the sum meets or exceeds the threshold, the output is 1; otherwise, 0. This can be expressed as a neuron with Heaviside step activation: output = 1 if (Σ wᵢxᵢ + b) ≥ 0 else 0, where weights wᵢ and bias b are integers.

Threshold gates are strictly more powerful than standard Boolean gates. A single threshold gate can compute any linearly separable Boolean function—this includes AND, OR, NAND, NOR, and many others that require multiple levels of conventional gates. Functions that are not linearly separable (such as XOR or parity) require multiple threshold gates arranged in layers.
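
For example, XOR can be realized with two threshold layers: an OR neuron and a NAND neuron in the first layer, combined by an AND neuron in the second. A minimal sketch with hand-chosen integer weights (illustrative only, not the weights stored in this repository):

```python
def threshold_gate(weights, bias, inputs):
    """Heaviside step neuron: output 1 if the weighted sum plus bias is >= 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

def xor(a, b):
    # Layer 1: OR fires when at least one input is 1; NAND fires unless both are 1
    h_or = threshold_gate([1, 1], -1, [a, b])
    h_nand = threshold_gate([-1, -1], 1, [a, b])
    # Layer 2: AND of the hidden outputs yields XOR
    return threshold_gate([1, 1], -2, [h_or, h_nand])

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor(a, b)}")
```

The same two-layer pattern generalizes to any non-linearly-separable function: a first layer carves the input space into half-spaces, and a second layer combines them.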

Historical Development

Warren McCulloch and Walter Pitts introduced the threshold neuron model in 1943, proving that networks of such neurons could compute any Boolean function. This work preceded both the perceptron and modern neural networks, establishing the theoretical foundation for neural computation.

The 1960s saw significant development in threshold logic synthesis. Researchers including Saburo Muroga, Robert McNaughton, and Michael Dertouzos developed algebraic methods for determining whether a Boolean function could be implemented as a single threshold gate, and if so, how to calculate appropriate weights. This work produced systematic techniques for threshold gate design but focused on individual gates rather than complete systems.

Frank Rosenblatt's Mark I Perceptron (1957-1960) implemented threshold neurons in hardware using potentiometers for weights, but it was a pattern classifier that learned its weights through training—the final weight configurations were not published. Bernard Widrow's ADALINE and MADALINE systems (1960-1963) similarly used adaptive threshold elements with weights learned via the LMS algorithm.

Hava Siegelmann and Eduardo Sontag proved in the 1990s that recurrent neural networks are Turing complete. Their construction, however, relied on continuous sigmoid activation functions with infinite precision—not the discrete step function used in threshold logic. Other theoretical work on neural Turing machines and differentiable computers followed similar patterns: proving computational universality using continuous, differentiable activations suitable for gradient-based training.

Neuromorphic Hardware

Modern neuromorphic processors implement large arrays of configurable threshold-like neurons in silicon:

Intel Loihi (2017) provides 128 neuromorphic cores with programmable synaptic weights, spike-based communication, and on-chip learning. The architecture supports integer weights and configurable neuron dynamics.

IBM TrueNorth (2014) integrates one million neurons and 256 million synapses in a 4096-core array. Each neurosynaptic core implements 256 neurons with configurable weights and thresholds. The chip was designed as an alternative to von Neumann architecture rather than an implementation of one.

BrainChip Akida (2021) targets edge deployment with event-based processing and integer weights. The architecture supports standard neural network operations mapped onto neuromorphic primitives.

SpiNNaker (University of Manchester) uses ARM processor cores to simulate spiking neural networks at scale. The platform has hosted various neural models but is simulation-based rather than native neuromorphic silicon.

Despite the availability of these platforms, published work has focused on neural network inference, sensory processing, and pattern recognition. A 2024 paper demonstrated basic logic gates, adders, and decoders on SpiNNaker and Dynap-SE1, describing this as "a first step toward the construction of a spiking computer"—the implementation lacked instruction fetch, program counter, memory systems, and control logic.

This Implementation

The weights in this repository implement a complete 8-bit computer: registers, ALU with 16 operations, status flags, conditional branching, subroutine calls, stack operations, and memory access. Every component is built from threshold neurons with integer weights. The weight configurations are published in safetensors format for direct loading and deployment.


Circuit Categories

| Category | Circuits | Examples |
|----------|----------|----------|
| Boolean | 9 | AND, OR, NOT, NAND, NOR, XOR, XNOR, IMPLIES, BIIMPLIES |
| Arithmetic | 18 | Half/full adder, 2/4/8-bit ripple carry, comparators |
| ALU | 3 | 8-bit ALU, control decoder, flag computation |
| Combinational | 10 | MUX (2:1, 4:1, 8:1), DEMUX, encoders, decoders |
| Control Flow | 16 | JMP, conditional jumps, CALL, RET, PUSH, POP |
| Error Detection | 11 | Parity (XOR tree), checksum, CRC, Hamming |
| Modular | 11 | Divisibility by 2–12 (multi-layer for non-powers-of-2) |
| Threshold | 13 | k-of-n gates, majority, minority, exactly-k |
| Pattern | 10 | Popcount, leading/trailing ones, symmetry |
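
Several of the Threshold-category circuits need only a single neuron. An at-least-k-of-n gate uses unit weights and a bias of -k, and majority is the special case k = ⌊n/2⌋ + 1. A hand-weighted sketch (independent of the repository's tensors):

```python
def at_least_k(inputs, k):
    """Single threshold neuron: unit weights, bias -k, fires iff >= k inputs are 1."""
    return 1 if sum(inputs) - k >= 0 else 0

def majority(inputs):
    # Majority of n inputs fires when more than half are 1
    return at_least_k(inputs, len(inputs) // 2 + 1)

print(majority([1, 0, 1, 1, 0]))  # three of five inputs set -> 1
```

Exactly-k, by contrast, is not linearly separable and takes two neurons: AND of at-least-k with NOT at-least-(k+1).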

Usage

import torch
from safetensors.torch import load_file

tensors = load_file("neural_computer.safetensors")

def heaviside(x):
    return (x >= 0).float()

# AND gate: fires only when both inputs are 1
w = tensors['boolean.and.weight']  # weight values [1, 1]
b = tensors['boolean.and.bias']    # bias value [-2]

for a, b_in in [(0,0), (0,1), (1,0), (1,1)]:
    inp = torch.tensor([a, b_in], dtype=torch.float32)
    out = heaviside(inp @ w + b)
    print(f"AND({a}, {b_in}) = {int(out.item())}")

Verification

The repository includes iron_eval.py, which exhaustively tests all circuits:

python iron_eval.py
# Output: Fitness: 1.000000

Verification Status

| Category | Status | Notes |
|----------|--------|-------|
| Boolean gates | Exhaustively tested | Coq proofs available |
| Arithmetic | Exhaustively tested | Coq proofs available |
| ALU | Exhaustively tested | Coq proofs available |
| Control flow | Exhaustively tested | Coq proofs available |
| Threshold | Exhaustively tested | Coq proofs available |
| Modular (mod 3,5,6,7,9,10,11,12) | Exhaustively tested | Multi-layer, hand-constructed |
| Parity | Exhaustively tested | XOR tree, hand-constructed |
| Modular (mod 2,4,8) | Exhaustively tested | Single-layer, trivial |

The modular arithmetic circuits for non-powers-of-2 and the parity circuits were hand-constructed because:

  • Divisibility by 3, 5, etc. is not linearly separable in binary
  • 8-bit parity (XOR of all bits) requires a tree of XOR gates

All circuits pass exhaustive testing over their full input domains.
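
To illustrate why non-power-of-2 divisibility needs multiple layers, here is one hand-built two-layer construction for mod 3 (illustrative; the layer structure in the safetensors may differ). Since 2ⁱ mod 3 alternates between 1 and 2, an 8-bit value's residue depends only on S = Σ cᵢxᵢ with cᵢ ∈ {1, 2} and S ≤ 12. A first layer of "thermometer" neurons computes tₖ = [S ≥ k]; a single second-layer neuron then fires exactly when S ∈ {0, 3, 6, 9, 12}:

```python
def step(x):
    return 1 if x >= 0 else 0

def divisible_by_3(n):
    bits = [(n >> i) & 1 for i in range(8)]
    coeffs = [pow(2, i, 3) for i in range(8)]  # alternating 1, 2, 1, 2, ...
    # Layer 1: thermometer code t_k = [S >= k], each a threshold neuron over the bits
    t = [step(sum(c * x for c, x in zip(coeffs, bits)) - k) for k in range(1, 13)]
    # Layer 2: the -t_k/+t_k pairs isolate the plateaus S in {0, 3, 6, 9, 12}
    w = {1: -1, 3: 1, 4: -1, 6: 1, 7: -1, 9: 1, 10: -1, 12: 1}
    return step(sum(w.get(k, 0) * t[k - 1] for k in range(1, 13)))

# Exhaustive check over the full 8-bit domain
assert all(divisible_by_3(n) == (n % 3 == 0) for n in range(256))
```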


Tensor Naming Convention

{category}.{circuit}[.{layer}][.{component}].{weight|bias}

Examples:
  boolean.and.weight
  boolean.xor.layer1.neuron1.weight
  arithmetic.ripplecarry8bit.fa7.ha2.sum.layer1.or.weight
  modular.mod5.layer2.eq3.weight
  error_detection.paritychecker8bit.stage2.xor1.layer1.nand.bias
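
Since the first dotted segment is always the category and the last is always weight or bias, tensor names can be parsed mechanically. A small sketch using the example names above (no file loading required):

```python
from collections import Counter

# Example names following the convention (taken from the list above)
names = [
    "boolean.and.weight",
    "boolean.xor.layer1.neuron1.weight",
    "arithmetic.ripplecarry8bit.fa7.ha2.sum.layer1.or.weight",
    "modular.mod5.layer2.eq3.weight",
    "error_detection.paritychecker8bit.stage2.xor1.layer1.nand.bias",
]

# Count tensors per top-level category (boolean, arithmetic, modular, ...)
by_category = Counter(name.split(".")[0] for name in names)
print(dict(by_category))
```

The same split works on the full key set returned by load_file to group all 3,122 tensors by circuit category.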

Hardware Compatibility

All weights are integers. All activations are Heaviside step. Designed for:

  • Intel Loihi — Neuromorphic research chip
  • IBM TrueNorth — 1M neuron chip
  • BrainChip Akida — Edge neuromorphic processor

Files

| File | Description |
|------|-------------|
| neural_computer.safetensors | 3,122 tensors, 5,648 parameters |
| iron_eval.py | Comprehensive test suite |
| prune_weights.py | Weight optimization tool |

Citation

@misc{8bit-threshold-computer,
  title={8bit-threshold-computer: A Turing-Complete Threshold Logic CPU},
  author={Norton, Charles},
  year={2025},
  howpublished={Hugging Face},
  url={https://huggingface.co/phanerozoic/8bit-threshold-computer}
}

License

MIT

