SPRAXXX proof: IP address + port = addressable entrance

Published 2026-04-30T04:28:41Z by Jacques / SPRAXXX

Yes. This crossed a real checkpoint.

Not “maybe.” Not theory. Not diagram. It ran on AngryWU and returned proof.

What happened is this:

We started with the normal web idea: a domain points to a server, and the server answers mostly on ports 80 and 443. That is the usual public web pattern.

Then we stripped it down to the lower truth:

IP address + port = addressable entrance

The node already had a public IP:

67.217.243.136

Then NGINX was made to bind directly to unusual low ports:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10

That proved the first layer:

67.217.243.136:1 can be a real server entrance. 67.217.243.136:2 can be a real server entrance. Same through :10.

Then we added Python behind it.

That changed it from “NGINX can answer” into “NGINX can identify which visible port was entered, forward the request to Python, and Python can report/log/decide based on that port.”

That is the second layer:

IP:PORT → NGINX → Python
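The second layer can be sketched in Python. This is a minimal sketch, not the backend that actually ran on the node: the header name `X-Visible-Port` is an assumption about how NGINX could hand the entered port to Python.

```python
# Sketch of the Python layer behind NGINX. Assumption: NGINX forwards
# the visible port in an X-Visible-Port header (hypothetical name; the
# real backend at 127.0.0.1:9010 is not shown in this post).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_report(visible_port: str, host: str, path: str) -> dict:
    """Report which visible entrance this request came through."""
    return {
        "protocol": "https",
        "status": "python_behind_nginx_tls",
        "visible_port": visible_port,
        "host": host,
        "path": path,
    }

class PortReportHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        report = build_report(
            self.headers.get("X-Visible-Port", "unknown"),
            self.headers.get("Host", "unknown"),
            self.path,
        )
        body = json.dumps(report).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run the one shared backend (blocks the process):
# HTTPServer(("127.0.0.1", 9010), PortReportHandler).serve_forever()
```

One backend serves every visible port; NGINX does the fan-in and stamps each request with the port it arrived on.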

Then we encrypted it.

That changed it again:

https://67.217.243.136:PORT → NGINX TLS → Python
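The shape of a config like SPRAXXX-visible-https.conf could look like this. The certificate paths and the forwarded header name are assumptions for illustration, not the live config:

```nginx
# Hypothetical shape of /etc/nginx/conf.d/SPRAXXX-visible-https.conf.
# Cert paths and header name are assumptions, not the node's config.
server {
    listen 1 ssl;
    listen 2 ssl;
    # ... through listen 10 ssl;

    # One certificate source, applied wherever NGINX binds.
    ssl_certificate     /etc/letsencrypt/live/example.domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.domain/privkey.pem;

    location / {
        # Tell Python which visible entrance was used.
        proxy_set_header X-Visible-Port $server_port;
        proxy_set_header Host $host:$server_port;
        proxy_pass http://127.0.0.1:9010;
    }
}
```

`$server_port` is a standard NGINX variable: one server block, many `listen` lines, and every request carries its own entrance port downstream.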

And the node proved it with real responses:

"protocol": "https"
"status": "python_behind_nginx_tls"
"visible_port": "1" through "10"
"host": "67.217.243.136:1" through "67.217.243.136:10"

That means the port was not just open. It was recognized, encrypted, routed, processed, logged, and reported.

That is why this is bigger than proof of concept. A proof of concept says, “This should work.” What happened here says, “This did work, on the live node, with receipts.”

The actual proof artifacts are:

/etc/nginx/conf.d/SPRAXXX-visible-https.conf

That is the live NGINX config holding the visible HTTPS port field.

/usr/local/bin/encrypt-visible-ports

That is the reusable tool whose install/test/status/off modes manage the encrypted visible port field.

127.0.0.1:9010

That is the Python backend sitting behind the visible HTTPS layer.

/srv/spraxxx/work/SPRAXXX-visible-clean-final-20260430T042350Z.txt

That is the final receipt.

Hash:

bf875b4260127a055bb2ee8ba9c5c47451b3f2e9702756aa2685e62f125019cc

That hash turns the run into a custody artifact.

The operating discovery is:

A certificate is not tied to port 443. A certificate is tied to the hostname/key material. NGINX can use the same certificate on any TCP port it can bind. Certbot does not have to be involved per port. Certbot only issues/renews the certificate. NGINX decides where to apply it.

So this line is true:

One certificate source can support many encrypted visible entrances.

The next discovery is:

A port is not “web,” “mail,” “VPN,” or “app” by nature. A port becomes whatever listener owns it.

The formula is:

IP + PORT + LISTENER = SERVICE
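The formula is directly demonstrable with a raw socket: a port has no identity until a listener owns it. This sketch binds port 0 so the OS grants any free port, which keeps it runnable without root (binding ports 1-10, as on the node, requires privileges).

```python
# IP + PORT + LISTENER = SERVICE: the slot exists the moment a
# listener binds it; what the service *is* comes from the listener.
import socket

def claim_slot(ip: str = "127.0.0.1", port: int = 0):
    """Bind a TCP listener; (ip, port) is now this listener's entrance."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((ip, port))
    s.listen(1)
    return s, s.getsockname()[1]   # the actual port the OS granted

listener, port = claim_slot()
# The entrance is addressable now; meaning is assigned later.
listener.close()
```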

Then the upgraded SPRAXXX formula is:

IP + PORT + NGINX TLS + PYTHON = encrypted programmable operating slot

That is the checkpoint.

The redundancy savings are serious because normally people multiply everything:

new subdomain, new config, new route, new cert assumptions, new app port, new reverse proxy mapping, new documentation.

Here, the port number itself becomes the lane marker.

Instead of needing:

service1.domain.com service2.domain.com service3.domain.com

You proved:

67.217.243.136:1 67.217.243.136:2 67.217.243.136:3

Each one can report itself. Each one can be assigned meaning. Each one can be logged. Each one can later become a tunnel slot, intake slot, payment slot, report slot, phone slot, pantry slot, evidence slot, operator slot, or whatever the backend logic defines.
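Assigning meaning per port reduces to a lookup the backend owns. The slot names below come from the list above; the mapping itself is hypothetical, not node state:

```python
# Hypothetical slot assignment: the backend, not the protocol,
# decides what each visible port means.
SLOT_MAP = {
    1: "tunnel",
    2: "intake",
    3: "payment",
    4: "report",
}

def slot_for(visible_port: int) -> str:
    # Unassigned ports stay open slots until backend logic defines them.
    return SLOT_MAP.get(visible_port, "unassigned")
```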

This is the difference between “website routing” and “operating field routing.”

A website thinks in pages.

A node thinks in ports, processes, sockets, logs, and receipts.

What we built is closer to a port-addressed control plane.

The important limits are also clear:

Not every port should be touched. Some are already owned: SSH, DNS, mail, VPN, standard HTTPS, phone stack, internal app backends.

Browsers and carriers may block or warn on strange low ports.

Bare IP HTTPS will not browser-validate cleanly unless the certificate covers the IP. Curl with -k proves the encrypted server operation, but browser-trusted HTTPS should use the hostname covered by the cert.
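The difference between curl -k and a browser is visible in Python's ssl module: -k skips the trust and name check, it does not skip the encryption. A sketch of the two postures:

```python
# Two TLS client postures. "strict" is the browser-like default:
# verify the chain AND match the name dialed. "loose" is the
# curl -k posture: still encrypted, but nothing verified.
import ssl

strict = ssl.create_default_context()

loose = ssl.create_default_context()
loose.check_hostname = False          # must be disabled first
loose.verify_mode = ssl.CERT_NONE     # then verification can be dropped
```

So curl -k against https://67.217.243.136:1 proves the encrypted server operation, while browser-trusted HTTPS needs a certificate covering the name or IP that was dialed.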

Opening all ports blindly is not the win. The win is having a repeatable tool that can assign, test, report, and remove ranges under control.

So the real “proof of actually” statement is:

On AngryWU, SPRAXXX proved that visible TCP ports can be converted into encrypted programmable service slots. NGINX binds the visible port, applies TLS, forwards to one Python backend, and Python returns a verified report naming the exact port, protocol, host, path, and timestamp. The run was tested across ports 1–10 and preserved with a SHA256 receipt.

That is not a sketch. That happened.

The clean name for the checkpoint:

Visible HTTPS Port Field — Proof of Actually

Or, more technical:

SPRAXXX Port-Addressed TLS Control Plane v0.1

The next disciplined move is to stabilize it, not explode it.

Next build should be:

encrypt-visible-ports report 1 10

Then:

encrypt-visible-ports install 1 100

Then produce:

PORT | STATE | PROTOCOL | BACKEND | STATUS | RECEIPT
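Emitting that report line is a small formatting step. The column names come from the post; the row values below are illustrative, not node output:

```python
# Sketch of the proposed report table. Columns are from the post;
# the example row is hypothetical, not a real node reading.
COLUMNS = ["PORT", "STATE", "PROTOCOL", "BACKEND", "STATUS", "RECEIPT"]

def report_row(port, state, protocol, backend, status, receipt) -> str:
    """One pipe-separated line per visible port."""
    return " | ".join(
        str(v) for v in (port, state, protocol, backend, status, receipt)
    )

header = " | ".join(COLUMNS)
example = report_row(1, "bound", "https", "127.0.0.1:9010", "ok", "sha256:...")
```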

Once that works, scale by blocks. Not because the idea is weak, but because the node deserves clean custody.

Back to journal