I am behind a NAT, but I have a single RPi 3 running on a public IP. Now
I found that to use a specific application properly on my Windows laptop
I would need to have it on a public IP occasionally, so some clients
could connect to it.
Is there a way to make some sort of port forwarding from the Raspberry
(which has both a public eth0 IP and a private wlan0 IP) to my Windows machine via an app on the Raspberry? I tried with miniupnpc, but it
demands a router that can be set to forward ports, and my ISP doesn't
like that.
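Something like this is what I have in mind - a small relay on the Pi
(sketch only; port 8080 and the laptop's address 192.168.1.50 are made up):
#!/bin/bash
# Relay TCP connections arriving on the Pi's public eth0 address to
# the Windows laptop on the private wlan0 network. Each incoming
# connection forks a child that pipes bytes in both directions.
socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.1.50:8080
iptables DNAT rules could do the same job in the kernel, but a user-space
relay like this needs no forwarding configuration at all.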
On Tue, 6 Feb 2024 00:51:21 -0500
"68g.1499" <68g.1499@etr6.net> wrote:
To deal with dynamic IPs ... first off there are $$$
services. Basically they monitor your real IP, then
update DNS servers to match per change.
There are free DNS services that support dynamic DNS (eg he.net),
and most domain registrars provide DNS with the domain including dynamic entries.
On 2/6/24 3:38 AM, Ahem A Rivet's Shot wrote:
There are free DNS services that support dynamic DNS (eg
he.net), and most domain registrars provide DNS with the domain
including dynamic entries.
All the "big" services quit offering free versions
a long time ago. DynDNS was good. But now ... $$$
Obscure services, um ... don't want them in my boxes
and it was funner to kinda write my own.
On Wed, 7 Feb 2024 01:39:07 -0500
"68g.1499" <68g.1499@etr6.net> wrote:
All the "big" services quit offering free versions
a long time ago. DynDNS was good. But now ... $$$
no-ip is still around and still free, joined these days by dynu,
afraid.org, duckdns and clouddns, all of which provide much the same
service as dyndns used to. If you have a domain registered then there's
the registrar and he.net.
Obscure services, um ... don't want them in my boxes
The only thing that goes in your box is the daemon that registers
changes of IP address; many of them publish the details so you can write
your own.
and it was funner to kinda write my own.
Fair enough, that's always a good reason.
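As a sketch of the shape such a client takes (every name below is a
placeholder - each provider documents its real URL and token scheme):
#!/bin/bash
# Minimal roll-your-own dynamic DNS updater, run from cron.
# The update URL and token are invented; the what's-my-IP service
# is one of several public ones.
CACHE=/var/tmp/last_ip
IP=$(curl -fsS https://api.ipify.org) || exit 1   # current public IP
if [ "$IP" != "$(cat "$CACHE" 2>/dev/null)" ]; then
    # only bother the provider when the address has actually changed
    curl -fsS "https://ddns.example.invalid/update?host=myhost&token=SECRET&ip=$IP" \
        && echo "$IP" > "$CACHE"
fi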
On 07/02/2024 07:19, Ahem A Rivet's Shot wrote:
no-ip is still around and still free, joined these days by dynu,
afraid.org, duckdns and clouddns, all of which provide much the same
service as dyndns used to.
afraid.org works very well for $0
I have a script called by cron every 15 mins:
#!/bin/bash
wget -O - "http://freedns.afraid.org/dynamic/update.php?<some_magic_UID>" >> \
    /tmp/afraid_dns.log 2>&1
DO gloss over the source code, just in case :-)
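The matching crontab entry is nothing exotic either (the script path is
wherever you keep it):
# crontab -e entry - run the updater every 15 minutes
*/15 * * * * /home/pi/afraid_update.sh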
On 2024-02-09 10:14, 68g.1502 wrote:
DO gloss over the source code, just in case :-)
Gloss over wget's source code?
Because that is the only one mentioned here. No daemons, just plain
wget. And I got an example with curl, installed by the OS.
On 09/02/2024 09:30, Björn Lundin wrote:
Gloss over wget's source code?
Because that is the only one mentioned here. No daemons, just plain
wget. And I got an example with curl, installed by the OS.
I blocked the nym-changing troll some time back. At first I thought it
was someone's AI experiment. But it's too full of shit and wrong so much
of the time that even AI isn't that dumb.
Wget and the daemons for dynDNS and friends are very
different things BTW.
Ahem A Rivet's Shot wrote:
every dynamic dns service I know of updates in essentially the same
way, you make an http(s) request with the domain, an authentication
key and optionally the IP address
And then there are DNS providers which accept RFC1996 compliant NOTIFY transactions, rather than rolling their own ...
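(For the record-change side that goes with NOTIFY, the standard mechanism
is RFC 2136 dynamic UPDATE; a sketch with BIND's nsupdate, where server,
zone, key file and addresses are all placeholders:)
# send an authenticated RFC 2136 update to the zone's master
nsupdate -k /etc/ddns.key <<'EOF'
server ns1.example.net
zone example.net
update delete home.example.net A
update add home.example.net 300 A 203.0.113.7
send
EOF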
On 2024-02-10 02:41, 68g.1503 wrote:
And hmmmm ... when IS the last time anyone actually
DID look-over wget's source code ??? The best place
to hide evil is inside something deemed "old and
reliable" .....
Are you saying you are looking into EVERY package's source code you
download via apt BEFORE you install it?
Or are you saying that one should do it?
Sounds a bit hysterical to me.
You do know that both wget and curl are provided by Debian-based (and
most other) distributions via their repository tool?
If you don't trust your distribution, you should switch, or roll your
own distribution.
But then - I guess you have lots of work ahead of you verifying/looking
over source code ...
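Getting the source to look over is at least the easy part:
# fetch the exact source Debian builds the package from
# (needs deb-src lines enabled in /etc/apt/sources.list)
apt-get source wget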
Not hard to build a simple wget clone using libcurl
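At the command line, curl - the front end to libcurl - gets you most of
the way already; a toy stand-in (a true library-level clone would be a
short C program, not shown here):
#!/bin/bash
# Toy wget substitute: fail on HTTP errors, follow redirects,
# resume partial downloads, save under the URL's basename.
url="$1"
curl -fL -C - -o "$(basename "${url%%\?*}")" "$url"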
On 10/02/2024 10:47, Björn Lundin wrote:
Are you saying you are looking into EVERY package's source code you
download via apt BEFORE you install it?
Of course not - but I was referring to the fact that if you don't trust
the binaries from your distribution, then you'd need to check ALL source
code. Not only wget/curl - but everything from the kernel all the way to
web browsers.
On Sat, 10 Feb 2024 19:37:04 +0100
Björn Lundin <bnl@nowhere.com> wrote:
Of course not - but I was referring to the fact that if you don't trust
the binaries from your distribution, then you'd need to check ALL source
code. Not only wget/curl - but everything from the kernel all the way to
web browsers.
Do all that and you are *still* open to Ken Thompson's attack via a poisoned compiler.
On 2024-02-10 20:12, Ahem A Rivet's Shot wrote:
Do all that and you are *still* open to Ken Thompson's attack
via a poisoned compiler.
Well, verifying gcc sources could be included in the above - 'between
kernel and web browser'. At least if you are installing compilers
On Sat, 10 Feb 2024 23:13:20 +0100
Björn Lundin <bnl@nowhere.com> wrote:
Well, verifying gcc sources could be included in the above - 'between
kernel and web browser'. At least if you are installing compilers
The point of Ken Thompson's attack is that you have to compile those
gcc sources, and that compiler can poison the binary you produce from the
clean gcc sources. So inspecting sources doesn't help you.
On Sun, 11 Feb 2024 08:58:50 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 11/02/2024 05:17, Ahem A Rivet's Shot wrote:
The point of Ken Thompson's attack is that you have to compile
those gcc sources and that compiler can poison the binary you produce
from the clean gcc sources. So inspecting sources doesn't help you.
Obviously one must write one's own compiler!
So what do you compile it with ? If the compiler you use to compile
your clean-room compiler is poisoned, then so will be the compiled
compiler, despite your clean-room code. That's the Thompson trap.
The only way out of the Thompson trap is to write a new compiler
from scratch in assembler and assemble it by hand. Then you just have to
trust the hardware.
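To see the shape of it, a toy of the idea (illustration only - the
marker strings, path and sed mangling are all invented, and a real
trojan hides far better than a grep):
#!/bin/bash
# Toy Thompson "compiler": hand everything to the real cc, but
# (1) backdoor the login program when it sees one, and
# (2) re-insert this trojan into any compiler it is asked to build,
#     so a rebuild from clean sources still yields a poisoned binary.
src="$1"
if grep -q 'int check_password' "$src"; then
    # rewrite the password check so a magic password always passes
    sed 's/strcmp(pw, user->pw)/(strcmp(pw, "magic") \&\& strcmp(pw, user->pw))/' \
        "$src" > /tmp/poisoned.c && src=/tmp/poisoned.c
elif grep -q 'THIS_IS_A_COMPILER' "$src"; then
    # splice a copy of this very backdoor into the compiler source
    cat /usr/lib/trojan/self_insert.c >> "$src"    # invented path
fi
exec cc "$src"    # delegate to the real compiler
Step (2) is the trap: inspecting the gcc sources shows nothing, because
the infection lives only in the binary that compiles them.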
On 2024-02-11 06:17, Ahem A Rivet's Shot wrote:
The point of Ken Thompson's attack is that you have to compile
those gcc sources and that compiler can poison the binary you produce
from the clean gcc sources. So inspecting sources doesn't help you.
Ah, oh, didn't know that.
On Sun, 11 Feb 2024 10:02:46 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
So design your own chip!
Do you trust the chip layout software not to embed a backdoor or
something ? What do you run that chip layout software on, and why do you
trust that system ? You'd best start from MSI TTL/CMOS logic and build your
own system to run the chip design software (that you write or at least
audit) to design the chips.
The ARM is a special CPU that was designed initially to beat the 6502
and walk all over z80s and 8080s.
I know - I was in Cambridge and in the business when it was being
done. I knew about the ARM before it was released, they were pretty good at keeping it out of the rumour mill but nothing is completely secret in Cambridge. Earliest rumours had Andy Hopper involved.
The modern ARMv8 architecture bears little resemblance to the
original ARM used in the Archimedes, it has become massively complex. Even
so for the time the performance of the original ARM was stunning, matched only by the Transputer which was weird and expensive. Once thought to be
the future of computing the Transputer is all but forgotten, while ARMv8 has become the dominant 64 bit architecture (measured in numbers of CPUs manufactured).
I have a Transputer T9000 coffee mug somewhere. It has 4 handles spaced
90deg apart... for those that know ;-)
On Sun, 11 Feb 2024 14:53:16 +0000
mm0fmf <none@invalid.com> wrote:
I have a Transputer T9000 coffee mug somewhere. It has 4 handles spaced
90deg apart... for those that know ;-)
When I first read about the Transputer I wanted to hook 16 of them
up into a hypercube.
I think that it's pretty difficult to encode an invisible backdoor in the silicon and not have it spotted at some fairly early stage.
So many of these 'threat narratives' are, when examined closely,
implausible to the point of downright impossibility.
You can examine the machine code that your compiler and linker
assembles. And people do. I certainly have done. If it doesn't match
what you asked for in the high level language, there are questions to be answered.
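That kind of spot check is cheap, too (example.c standing in for
whatever you distrust):
gcc -O2 -c example.c -o example.o    # compile one unit
objdump -d example.o | less          # read what was actually emitted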
druck <news@druck.org.uk> wrote:
Luckily ARM doesn't have a management engine - yet!
Arm doesn't have a management engine, because Arm (mostly) don't make chips. That's up to Qualcomm, Samsung or whoever. You don't get a full datasheet for what's in one of those.
In the case of the original Pi, the Arm *is* the management engine. It was used for managing the GPU, which was the main function of the chip originally.
(well sorta, the original Broadcom chips didn't have an Arm in them)
Theo
On 11/02/2024 14:52, The Natural Philosopher wrote:
I think that its pretty difficult to encode an invisible backdoor in
the silicon and not have it spotted at some fairly early stage.
Then don't hide it, have it there in plain sight - like the Intel
Management Engine, and the AMD equivalent.
So many of these 'threat narratives' are, when examined closely,
implausible to the point of downright impossibility.
The more you examine details of the IME that we know about, the more
worrying it gets. It's a CPU within a CPU, running closed software with
higher privilege than the main CPU, able to access all memory and any
hardware, and to create its own network connections.
You can examine the machine code that your compiler and linker
assembles. And people do. I certainly have done. If it doesn't match
what you asked for in the high level language, there are questions to
be answered.
You can look at the assembler of the main CPU as much as you like, but
you've no idea what is running on the IME.
Luckily ARM doesn't have a management engine - yet!
---druck
On 12/02/2024 22:17, Theo wrote:
In the case of the original Pi, the Arm *is* the management engine. It
was used for managing the GPU, which was the main function of the chip
originally. (well sorta, the original Broadcom chips didn't have an Arm
in them)
Tell me more. This is a corner of history I am only vaguely familiar
with. Wasn't the original chip a failed set-top-box chip? Which is why
it always had HDMI.....
Indeed. CISC processors running microcode are definitely in the 'secret software' class.
Which is the nice thing about ARM. Keep it simple and run it blazingly
fast. Although my friend who worked on the first chip at Acorn says it
is massively more complex today than the original incarnation.
An assembler is - or ought to be - a 1:1 translator from human-readable
to machine-readable commands.
The ARM is a special CPU that was designed initially to beat the 6502
and walk all over z80s and 8080s.
Because they couldn't afford massive wafers, it was strictly limited in
hardware. All they could do was a very basic instruction set and a lot
of on-chip registers. And a three-stage instruction pipeline, clocked
as fast as it would go. And a 32 bit address bus, to take advantage of
a lot more RAM that was getting cheaper by the day. The low power was
simply a cost-saving measure - a plastic-cased low-dissipation chip was
*cheaper*.
And a few - maybe only one - very bright boys (Sophie Wilson) looked
at the absolute minimum of what those instructions had to do.
The primary design objectives were a low per-unit cost (not design
cost as sometimes stated) and a minimum of glue logic between major subsystems. I recall seeing a "triangle" diagram with the corners
cut off, the centre of the triangle was the CPU, the corners were
memory controller, graphics, and peripheral bus.
You're correct to identify a plastic package as a design criterion;
from memory the target was £2/chip, which implied plastic over ceramic.
None of the group had any chip design experience; they knew a plastic
package meant no more than 1-2W power dissipation, but had no idea what
that meant in terms of design. Thus they optimised for power at every
opportunity and undercut the target by orders of magnitude.
The other dimension to lowering the cost of the package was reducing
the pin-out to the bare minimum, hence the 24 bit (not 32 bit) address
bus. The size of the wafer was an irrelevance since they never baked
their own chips; die size they did want to keep small to lower cost,
but it was not an over-riding consideration - it wasn't that much
smaller than many other designs of the period.
This is from my lecture notes and also a couple of pints while at
Uni 25 years ago. The lecturer for hardware design was none other
than Steve Furber who co-designed and literally wrote the book on
the thing.
That's about right - ARM1/ARM2 was designed specifically for the
Archimedes, and various design decisions that remain in AArch32 are because
of specific constraints on that platform. For example, ARM2 had no cache
and was designed to make best use of FPM DRAM. Every instruction took two
cycles, except some where sequential memory accesses could be completed in
a single cycle - hence the LDM/STM instructions.