lines. There was even COBOL for the 1401, which is pretty amazing considering
how tiny a 1401 was; it ran in 4000 characters of core. You needed tapes or a
disk, but even so.
I still have a copy of the Nevada COBOL compiler for the Commodore 64.
A C64 had 64K 8-bit bytes of RAM,
and the floppies held about 1.2MB but they were a
whole lot cheaper than 1311 disk packs.
A C64 had 64K 8-bit bytes of RAM,
Indeed (tho, IIRC 8kB of those were hidden by the ROM, tho you could
change the mapping to hide a different 8kB at different times and thus
access the full 64kB of RAM).
and the floppies held about 1.2MB but they were a
whole lot cheaper than 1311 disk packs.
IIRC they held only ~170kB.
1MB floppies arrived later (in the days of 3½" floppies, IIRC).
Stefan
On Fri, 30 Aug 2024 18:37:42 +0300, Michael S wrote:
It would not surprise me if the COBOL compiler was implemented and tested
on the 7080 and then, while still on the 7080, ported to the emulated 705
and then sold to users of the real 705.
As I recall, IBM wasn’t part of CODASYL, and had no part in COBOL
development. It pushed PL/I as its all-singing, all-dancing language for
both business and scientific use, for some time.
Eventually, of course, customers forced it to relent and offer COBOL.
Um, if you spent ten seconds looking at the 1960 COBOL report, you
would have found IBM listed as one of the contributors, and it
specifically lists the IBM Commercial Translator as one of the
sources for COBOL.
I still have a copy of the Nevada COBOL compiler for the Commodore 64.
The C64 was a supercomputer compared to a 1401.
Um, if you spent ten seconds looking at the 1960 COBOL report, you would
have found IBM listed as one of the contributors, and it specifically
lists the IBM Commercial Translator as one of the sources for COBOL.
Um, if you spent another ten seconds looking at the 1964 NPL report
... it would be glaringly obvious that the numbered data structures
and picture data come directly from COBOL.
Stefan Monnier <monnier@iro.umontreal.ca> wrote:
A C64 had 64K 8-bit bytes of RAM,
Indeed (tho, IIRC 8kB of those were hidden by the ROM, tho you could
change the mapping to hide a different 8kB at different times and thus
access the full 64kB of RAM).
For doing highly useful things like changing the name of the BASIC
commands: It was possible to PEEK from the ROM and POKE to
the underlying RAM.
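In cc65-style C, the trick looks something like this (a latter-day sketch,
obviously; the names are mine, but $0001 is the standard 6510 banking port
and $A000-$BFFF the BASIC ROM):

#include <stdint.h>

#define CPU_PORT  (*(volatile uint8_t *)0x0001)  /* 6510 banking port */
#define BASIC_ROM ((uint8_t *)0xA000)            /* $A000-$BFFF */
#define BASIC_LEN 0x2000U

void unhide_basic_ram(void)
{
    uint16_t i;
    /* Reads come from the ROM; writes always fall through to the RAM
       underneath, so copying each byte onto itself clones the ROM. */
    for (i = 0; i < BASIC_LEN; ++i)
        BASIC_ROM[i] = BASIC_ROM[i];
    /* Clear LORAM (bit 0) so $A000-$BFFF now reads from RAM, where the
       copy (keyword table included) can be patched at will. */
    CPU_PORT &= (uint8_t)~0x01U;
}

Patching the keyword table in the RAM copy is then exactly the
PEEK-from-ROM, POKE-to-RAM game described above.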
and the floppies held about 1.2MB but they were a
whole lot cheaper than 1311 disk packs.
IIRC they held only ~170kB.
And were extremely slow - around 300 bytes per second, comparable
to a card reader. But fast loaders could improve on that up to 10 kB/s.
On Sun, 1 Sep 2024 00:52:34 +0000, MitchAlsup1 wrote:
Imagine trying to fit LLVM or GCC into a PDP/11 address space.
Pretty much from the moment the PDP-11 range was introduced, it was
obvious the 16-bit address space was going to be a significant limitation.
Thomas Koenig wrote:
Stefan Monnier <monnier@iro.umontreal.ca> wrote:
A C64 had 64K 8-bit bytes of RAM,
Indeed (tho, IIRC 8kB of those were hidden by the ROM, tho you could
change the mapping to hide a different 8kB at different times and thus
access the full 64kB of RAM).
For doing highly useful things like changing the name of the BASIC
commands: It was possible to PEEK from the ROM and POKE to
the underlying RAM.
and the floppies held about 1.2MB but they were a
whole lot cheaper than 1311 disk packs.
IIRC they held only ~170kB.
And were extremely slow - around 300 bytes per second, comparable
to a card reader. But fast loaders could improve on that up to 10 kB/s.
They did that by reading every 1/2 sector, then reassembling after 2
rotations, instead of having to wait a full rotation between every
sector read?
It is true that the -11 died for lack of address space, but nobody I
know has ever come up with a good design where the address size is
bigger than the word size. You end up with segments as on the 286
or bank switching which is what later -11's did.
VAX stood for Virtual Address Extension. The key improvement was the
32 bit addresses. Everything else was a detail. Some of those details
were unfortunate but that's a different argument.
Also don't forget that back in that era everyone who had disks used
overlays. The IBM mainframe linkers had complicated ways to build
overlays and squeeze programs into 64K or whatever.
Even though the
address space was 16MB it was a long time before machines had that much
RAM and by then they'd added paging to make the physical memory size less relevant.
Thomas Koenig wrote: [Commodore 1541]
And were extremely slow - around 300 bytes per second, comparable
to a card reader. But fast loaders could improve on that up to 10 kB/s.
They did that by reading every 1/2 sector, then reassembling after 2
rotations, instead of having to wait a full rotation between every
sector read?
Just reading up on this... it seems that there was a bug in the
6522 I/O controller where the shift register sometimes dropped
a bit due to a race condition, and the workaround which was
hastily put in place was very slow.
Terje Mathisen <terje.mathisen@tmsw.no> writes:
Thomas Koenig wrote: [Commodore 1541]
And were extremely slow - around 300 bytes per second, comparable
to a card reader. But fast loaders could improve on that up to 10 kB/s.
They did that by reading every 1/2 sector, then reassembling after 2
rotations, instead of having to wait a full rotation between every
sector read?
No. The disk-access part of the disk drive was actually pretty cool,
supporting variable data rate with a 6502. The variable data rate and
group-code recording allowed getting 170KB on a single-sided
single-density disk, while FM disk controllers typically got 85KB, and
Wozniak got 140KB. I don't remember what was done about sector
interleaving, but when you replaced the slow interface (see below) to
the computer with a fast one (I have Prologic DOS), the drive was up
to 28 times faster.
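The ~170kB figure squares with the usual 1541 zone layout; the
sectors-per-track numbers below are quoted from memory, so treat the
table as an assumption:

#include <stdio.h>

int main(void)
{
    /* {first track, last track, sectors per track} per speed zone */
    static const int zone[4][3] = {
        { 1, 17, 21}, {18, 24, 19}, {25, 30, 18}, {31, 35, 17}
    };
    int total = 0;
    for (int z = 0; z < 4; z++)
        total += (zone[z][1] - zone[z][0] + 1) * zone[z][2];
    printf("%d sectors x 256 bytes = %d bytes\n", total, total * 256);
    /* prints: 683 sectors x 256 bytes = 174848 bytes, i.e. ~170KB */
    return 0;
}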
The problem is in the data transfer between the drive (which had its
own CPU) and the computer. They replaced the parallel IEEE-488
interface of the PET with a serial interface for cost reasons, and
then they botched the serial transfer. There were some contributing
factors, like the 6522 bug (the C64 had no 6522, though), and the
desired compatibility with the VIC-20 (which then did not happen
anyway: the Commodore 1540 (for the VIC-20) does not work unmodified
with the C64).
They probably could still have fixed the problem by giving some more
love to the firmware for the computer-to-disk-drive interface;
according to <https://www.c64-wiki.de/wiki/Schnelllader>, the fastest
serial fast-loaders were 15 times as fast as the serial routines in
the firmware. I guess that, when the C64 was designed, disk drives
were rare for computers with the price of the C64, so it did not seem
that important to make that interface fast.
- anton
On Fri, 30 Aug 2024 21:55:46 -0000 (UTC), John Levine wrote:
I still have a copy of the Nevada COBOL compiler for the Commodore 64.
The C64 was a supercomputer compared to a 1401.
I don’t think it was possible to fit a COBOL compiler into a 64K address
space -- not without heavy overlaying.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
I don’t think it was possible to fit a COBOL compiler into a 64K address
space -- not without heavy overlaying. The DEC one for the PDP-11 was a
280kiB executable.
Clearly it was possible.
https://commodore.software/downloads/download/211-application-manuals/13865-nevada-cobol-for-the-commodore-64
On Mon, 2 Sep 2024 10:11:48 -0000 (UTC), John Levine wrote:
The IBM mainframe linkers had complicated ways to build
overlays and squeeze programs into 64K or whatever.
So did DEC. They even introduced an additional version of their “Task Builder” (“TKB”, the RSX-11 linker, which also ran on RSTS/E) that could
build larger programs than the regular one: it was called the “Slow Task Builder” (“SLOTKB”).
I never quite learned how to build overlaid programs. I tried to help a
friend with this once, by going over the documentation with him to try to
clarify how to construct his .ODL (“Overlay Description Language”) file,
but I don’t think he got as far as doing an actual build before the
problem went away.
Turbo Pascal had overlay support for a while; I did try it once. AFAIR
it used a base-layer thunking system where all cross-module calls passed
through these thunks, which would check that the correct overlay was
currently loaded, swapping it in if needed. This was OK for making a
bunch of calls from the base to a single overlay, but thrashed horribly
if/when you had calls between two different overlays.
It is true that, while the x86 segmentation system sucked, the PDP-11’s
address limitations made it look good by comparison.
VAX stood for Virtual Address Extension. The key improvement was the 32
bit addresses. Everything else was a detail.
Hey, don’t forget the kitchen-sink instruction set. ;)
Also don't forget that back in that era everyone who had disks used
overlays.
Or multiple passes, as separate executables. One of our lecturers got hold
of a copy of a Pascal compiler for our PDP-11. That consisted of two
separate programs, and it wasn’t quite a complete Pascal implementation. ...
The IBM mainframe linkers had complicated ways to build
overlays and squeeze programs into 64K or whatever. ...
I never quite learned how to build overlaid programs. ...
Good point. The dmr C compiler was four passes: a front end that turned
source code into trees, a code generator that wrote assembler, an
optional optimizer that rewrote the assembler, and the assembler which
created the object file. Each pass was a separate program.
I think they thought it was paging, but of course 8K pages were way
too large. So they overreacted and the Vax pages were 512 bytes
which were too small.
When I was working on the DOS version of Javelin we used a linker
that had overlays just like the mainframe linkers. I got it to work
and squeezed the code into about 1/3 the space it'd take otherwise
but it wasn't pleasant.
In article <vb9r4g$2o1f$1@gal.iecc.com>, johnl@taugh.com (John Levine)
wrote:
I think they thought it was paging, but of course 8K pages were way
too large. So they overreacted and the Vax pages were 512 bytes
which were too small.
Everyone seems to use 4K pages now, and that works well for
ordinary-size programs in 32- and 64-bit address spaces. Bigger pages
have been available in many operating systems for a couple of decades,
but they seem to have been only used by programs that used memory in
specialised ways, like database indexes, and they were used on a
per-process basis.

The interesting thing that's happening now is that Android 15, due for
release soon, allows for devices that /only/ use 16K pages. Since
there's no conventional paging, they presumably want to keep the page
tables from eating too much RAM.
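For scale, assuming the usual 8 bytes per page-table entry: mapping 8GB
with 4K pages takes about two million last-level entries, roughly 16MB
of page tables; 16K pages cut that to about 4MB, and each TLB entry
covers four times as much address space.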
When I was working on the DOS version of Javelin we used a linker
that had overlays just like the mainframe linkers. I got it to work
and squeezed the code into about 1/3 the space it'd take otherwise
but it wasn't pleasant.
Was that PLink, the Phoenix linker? The project I worked on in 1986-87
used that for similar squashing.
John
I think they thought it was paging, but of course 8K pages were way too large.
So they overreacted and the Vax pages were 512 bytes which were too
small.
Linux supports THP (Transparent Huge Pages) where the OS automatically
updates the translation table to coalesce smaller pages into larger
blocks on intel/amd/arm64 processors.
scott@slp53.sl.home (Scott Lurndal) writes:
Linux supports THP (Transparent Huge Pages) where the OS automatically
updates the translation table to coalesce smaller pages into larger
blocks on intel/amd/arm64 processors.
I have seen good effects from THP a number of years ago: I had seen a
matrix multiply program miss the TLB every time on one of its input
matrices. And then I ran it again, and the effect was gone.
Eventually I found out that this is due to THP, and I have to disable
THP if I want to demonstrate that effect to my students.

Recently, I thought that by using a 2MB size for an mmap()-allocated
memory block I would get THP on Linux. Unfortunately, mmap() does not
align 2MB blocks to 2MB boundaries, and if it does not, the block is
not eligible for THP. This was disappointing.
- anton
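One workaround for the alignment problem (a sketch, assuming Linux's
mmap/munmap/madvise and the 2MB x86-64 huge-page size; whether THP then
kicks in still depends on kernel settings) is to over-allocate, trim to
a 2MB boundary, and hint with madvise():

#define _GNU_SOURCE              /* for MADV_HUGEPAGE on glibc */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define ALIGN (2UL * 1024 * 1024)        /* 2MB huge-page size */

static void *mmap_aligned(size_t size)
{
    size_t len = size + ALIGN;           /* room to slide to a boundary */
    uint8_t *raw = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED)
        return NULL;
    uintptr_t p = (uintptr_t)raw;
    uintptr_t aligned = (p + ALIGN - 1) & ~(uintptr_t)(ALIGN - 1);
    if (aligned > p)                     /* unmap the unaligned head */
        munmap(raw, aligned - p);
    size_t tail = (p + len) - (aligned + size);
    if (tail)                            /* unmap the tail */
        munmap((void *)(aligned + size), tail);
    madvise((void *)aligned, size, MADV_HUGEPAGE);   /* hint THP */
    return (void *)aligned;
}

int main(void)
{
    void *block = mmap_aligned(2 * ALIGN);   /* 4MB, 2MB-aligned */
    printf("%p\n", block);
    return 0;
}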
Recently, I thought that by using a 2MB size for an mmap()-allocated
memory block I would get THP on Linux. Unfortunately, mmap() does not
align 2MB blocks to 2MB boundaries, and if it does not, the block is
not eligible for THP.
What I really wanted was relocatable segments I could load on demand or paging, but that was a lot to ask of an 8088 or a real mode 286.
On Thu, 5 Sep 2024 14:59:20 -0000 (UTC), John Levine wrote:
What I really wanted was relocatable segments I could load on demand or
paging, but that was a lot to ask of an 8088 or a real mode 286.
The Burroughs machines had swappable segments, I gather. So the
difference between segmentation and paging came down to: the address
space is still linear, but segments are variable-length, and pages are
fixed-length.
It was soon generally agreed that fixed-length pages were easier to
deal with.
On Sat, 7 Sep 2024 23:45:10 +0000, Lawrence D'Oliveiro wrote:
On Thu, 5 Sep 2024 14:59:20 -0000 (UTC), John Levine wrote:
What I really wanted was relocatable segments I could load on demand or
paging, but that was a lot to ask of an 8088 or a real mode 286.
The Burroughs machines had swappable segments, I gather. So the
difference between segmentation and paging came down to: the address
space is still linear, but segments are variable-length, and pages are
fixed-length.
It was soon generally agreed that fixed-length pages were easier to
deal with.
Largely because you could swap out part of a segment.
mitchalsup@aol.com (MitchAlsup1) writes:
On Sat, 7 Sep 2024 23:45:10 +0000, Lawrence D'Oliveiro wrote:
On Thu, 5 Sep 2024 14:59:20 -0000 (UTC), John Levine wrote:
What I really wanted was relocatable segments I could load on demand
or paging, but that was a lot to ask of an 8088 or a real mode 286.
The Burroughs machines had swappable segments, I gather. So the
difference between segmentation and paging came down to: the address
space is still linear, but segments are variable-length, and pages are
fixed-length.
It was soon generally agreed that fixed-length pages were easier to
deal with.
Largely because you could swap out part of a segment.
The portion of the Burroughs MCP that dealt with rolling out and rolling
in segments was called 'HIHO'.
As in Hi Ho, Hi Ho, it's off to work we go...
On Sat, 7 Sep 2024 23:45:10 +0000, Lawrence D'Oliveiro wrote:
On Thu, 5 Sep 2024 14:59:20 -0000 (UTC), John Levine wrote:
What I really wanted was relocatable segments I could load on demand
or paging, but that was a lot to ask of an 8088 or a real mode 286.
The Burroughs machines had swappable segments, I gather. So the
difference between segmentation and paging came down to: the address
space is still linear, but segments are variable-length, and pages are
fixed-length.
It was soon generally agreed that fixed-length pages were easier to
deal with.
Largely because you could swap out part of a segment.
Ever use EIEIO? ;^D
On Sun, 8 Sep 2024 00:45:05 +0000, MitchAlsup1 wrote:
On Sat, 7 Sep 2024 23:45:10 +0000, Lawrence D'Oliveiro wrote:
On Thu, 5 Sep 2024 14:59:20 -0000 (UTC), John Levine wrote:
What I really wanted was relocatable segments I could load on demand
or paging, but that was a lot to ask of an 8088 or a real mode 286.
The Burroughs machines had swappable segments, I gather. So the
difference between segmentation and paging came down to: the address
space is still linear, but segments are variable-length, and pages are
fixed-length.
It was soon generally agreed that fixed-length pages were easier to
deal with.
Largely because you could swap out part of a segment.
I’m assuming you meant “couldn’t” ...
With pages you can swap parts of a segment (page at a time);
with segmentation (only) you cannot swap parts of the segment.
On Mon, 9 Sep 2024 16:55:44 +0000, MitchAlsup1 wrote:
With pages you can swap parts of a segment (page at a time)
with segmentation (only) you cannot swap parts of the segment.
Mixing the two seems counterproductive, unless you have few, large
segments. Burroughs-style segments were smaller and more numerous than
that.
According to Lawrence D'Oliveiro <ldo@nz.invalid>:
On Mon, 9 Sep 2024 16:55:44 +0000, MitchAlsup1 wrote:
With pages you can swap parts of a segment (page at a time) with
segmentation (only) you cannot swap parts of the segment.
Mixing the two seems counterproductive, unless you have few, large
segments. Burroughs-style segments were smaller and more numerous than
that.
Multics had paged segments. The segments were as I recall 18 bit word
addressed so about a megabyte. The pages were perhaps 1K. It worked
pretty well.
An underappreciated advantage of paging is that the pages are all the
same size.
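Roughly this shape, as a toy model in C (the 18-bit offsets and 1K pages
are from the recollection above; the table format is made up):

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 10                     /* 1K words per page */
#define PAGE_SIZE (1u << PAGE_BITS)
#define PAGES_PER_SEG (1u << (18 - PAGE_BITS))   /* 18-bit offsets */

struct segment {
    uint32_t len;                        /* segment length, in words */
    uint32_t frame[PAGES_PER_SEG];       /* per-segment page table */
};

/* Translate (segment, offset) to a physical word address. */
static int translate(const struct segment *s, uint32_t off, uint32_t *phys)
{
    if (off >= s->len)
        return -1;                       /* segment bounds fault */
    *phys = (s->frame[off >> PAGE_BITS] << PAGE_BITS)
          | (off & (PAGE_SIZE - 1));
    return 0;
}

int main(void)
{
    static struct segment s = { .len = 3000 };
    s.frame[0] = 7; s.frame[1] = 42; s.frame[2] = 9;
    uint32_t p;
    if (translate(&s, 1234, &p) == 0)    /* page 1 -> frame 42 */
        printf("offset 1234 -> physical word %u\n", (unsigned)p);
    return 0;
}

The same-size point is what makes the swap granularity work: any 1K page
of any segment fits any free 1K frame.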
On Tue, 10 Sep 2024 5:21:02 +0000, Lawrence D'Oliveiro wrote:
On Mon, 9 Sep 2024 16:55:44 +0000, MitchAlsup1 wrote:
With pages you can swap parts of a segment (page at a time) with
segmentation (only) you cannot swap parts of the segment.
Mixing the two seems counterproductive, unless you have few, large
segments. Burroughs-style segments were smaller and more numerous than
that.
I am not a fan of segmentation ...
I am not a fan of segmentation ...
Big segments versus small segments are quite different things.
On 9/11/2024 4:45 PM, Lawrence D'Oliveiro wrote:
Big segments versus small segments are quite different things.
Segments do different things. The aspect of segments that I liked in the 80286 ...
On Wed, 11 Sep 2024 18:24:24 -0700, Lars Poulsen wrote:
On 9/11/2024 4:45 PM, Lawrence D'Oliveiro wrote:
Big segments versus small segments are quite different things.
Segments do different things. The aspect of segments that I liked
in the 80286 ...
I wasn’t talking about x86-style “segmentation”, which was just a
hack to extend the address space in an awkward way. I was talking
about segments within a linear address space (and I assumed Mitch
was, too).
x86 Real mode segmentation is a hack to the address space. 80286
protected mode segmentation is something else. The only similarity
between the two is that the maximal segment size is the same.
According to Michael S <already5chosen@yahoo.com>:
x86 Real mode segmentation is a hack to the address space. 80286
protected mode segmentation is something else. The only similarity
between the two is that the maximal segment size is the same.
The 386 had 32 bit segments which should have made segmented code
practical and efficient and allowed giant programs with lots of
gigabyte segments. But Intel shot themselves in the foot. One problem
was that loading a segment register to switch segments remained
extremely slow so you still needed to write your program to avoid
doing so.

The other was that they mapped all the segments into a 32 bit linear
address space, and paged the linear address space. That meant that the
total size of all active segments had to fit into 4GB, at which point
people said fine, whatever, set all the segment registers to map a
single 4GB segment onto the linear address space and used it as a flat
address machine.
x86 Real mode segmentation is a hack to the address space. 80286
protected mode segmentation is something else. The only similarity
between the two is that the maximal segment size is the same.
The 386 had 32 bit segments which should have made segmented code
practical and efficient ...
The other was that they mapped all the segments into a 32 bit linear
address space, and paged the linear address space. That meant that the
total size of all active segments had to fit into 4GB, at which point
people said fine, whatever, set all the segment registers to map a
single 4GB segment onto the linear address space and used it as a flat
address machine.
Intel's only other choice was to use more than 32-bits as the segment
base address and they had run out of bits.
In article <20240912141925.000039f3@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
x86 Real mode segmentation is a hack to the address space. 80286
protected mode segmentation is something else. The only similarity
between the two is maximal size of segment is the same.
Yup. 80286 segmentation is horribly complicated as compared to real mode,
and still gives you tiny segments. The only widespread OS that used it
AFAIK was OS/2 1.x, much to its disadvantage. IBM's insistence that OS/2
run on the 286 was a world-shaping mistake.
386 mode was far more useful, and survives to the present day.
John
The other choice would have been a page table per segment, like Multics
did.
On Thu, 12 Sep 2024 22:33 +0100 (BST), John Dallman wrote:
IBM's insistence that OS/2 run on the 286 was a world-shaping
mistake.
Apparently it was to fulfil a promise made to customers who bought
the original PC AT, that IBM would someday offer an OS to take
advantage of its protected-mode features.
Of course, by the time OS/2 shipped, nobody cared any more.
On Wed, 11 Sep 2024 21:34:25 +0000, MitchAlsup1 wrote:
I am not a fan of segmentation ...
On 9/11/2024 4:45 PM, Lawrence D'Oliveiro wrote:
Big segments versus small segments are quite different things.
Segments do different things. The aspect of segments that I liked in the
80286 was that they provided an excellent mechanism for array bounds
checking. I would have loved having that option within a linear, paged
address space. But the languages in wide use at the time did not support
that.
On Wed, 11 Sep 2024 18:24:24 -0700, Lars Poulsen
<lars@beagle-ears.com> wrote:
On Wed, 11 Sep 2024 21:34:25 +0000, MitchAlsup1 wrote:
I am not a fan of segmentation ...
On 9/11/2024 4:45 PM, Lawrence D'Oliveiro wrote:
Big segments versus small segments are quite different things.
Segments do different things. The aspect of segments that I liked in the
80286 was that they provided an excellent mechanism for array bounds
checking. I would have loved having that option within a linear, paged
address space. But the languages in wide use at the time did not support
that.
One (of many) problem with 286 segments was that there simply were not
enough available to be really useful. You need enough for every
object in the program.
Intel had the opportunity to do segmentation much better with the 386,
but they fumbled it badly: too slow - and no more segments possible
than with 286.
The modern way to state this is: "You need an unbounded number of
segments."
On Fri, 13 Sep 2024 23:20:27 +0000, MitchAlsup1 wrote:
The modern way to state this is: "You need an unbounded number of
segments."
That would be true of small ones (à la Burroughs and I guess
Multics/GE-645), not large ones.