• More of my philosophy about the deep understanding of my new monotheistic religion ...

    From Amine Moulay Ramdane@21:1/5 to All on Sun Jul 16 10:45:14 2023
    Hello,


    More of my philosophy about the deep understanding of my new monotheistic religion and about the C++ <chrono> library and about professionalism and about the RDTSCP instruction and about the PreciseSleep() and about the essence of measuring time in the computer and about the final source code version of my StopWatch and about RDTSCP and RDTSC and about CPU frequency scaling and about memory barriers and about good technicality and the deeper understanding of the StopWatch and more about x86 and ARM processors and about solar cells and about AES 256 encryption and TSMC and about China and about the Transformers and about Toyota and about objective truth and about the objective and about the paper about the multiple universes and about the quantum world and about consciousness and about mathematics and about the universe and about mathematical probability and about positive behavior and about the positive mindset and about patience and about positive energy and about the "packaging" or "presentation" and about the ideal and about being idealistic and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and other algorithms..


    So now I will make something clear about my new monotheistic religion: when I say that the Qur'aan is not the words of God, I mean that Angel Gabriel enhanced prophet Muhammad to a certain level so that he also invented many parts of the Qur'aan; I mean that Angel Gabriel put his own ideas in the Qur'aan, and Angel Gabriel to a certain level controlled prophet Muhammad so that he could put his ideas in the Qur'aan. So you now understand how my new monotheistic religion views the Qur'aan and Islam. Other than that, my new monotheistic religion says that not only Adam and Eve and their descendants have been cursed by God, but the Qur'aan and the Bible too have been cursed by God, and that is why we find errors and scientific errors in the Qur'aan; and the fact that Angel Gabriel controlled prophet Muhammad and let him put his ideas in the Qur'aan is a curse from God. And you can read my thoughts below so as to understand my new monotheistic religion and my views on different subjects:

    So you have to understand me more: I have just invented a new monotheistic religion whose goal is also to unite the previous monotheistic religions, and I think it comes with universal laws, and I think that it is an efficient new monotheistic religion. And of course you are noticing that I am saying that I am a new prophet from God, since you have to read my story with the Angel from God etc., so I invite you to read about my new monotheistic religion at the following web link:


    https://groups.google.com/g/alt.culture.morocco/c/w_3tJ_myplc


    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so you have to understand the essence of measuring time in computers. For example, the C++ <chrono> library does not provide a direct way to retrieve the CPU frequency: the C++ <chrono> library primarily deals with time-related operations, such as measuring durations and performing time point calculations, but the best accuracy of the C++ <chrono> library is in nanoseconds, so it is not enough, since you also need accuracy in CPU cycles or ticks, as my new StopWatch provides. So I invite you to study my new StopWatch so as to know how to implement a good StopWatch, and I invite you to read my previous thoughts below about the StopWatch so that you understand my views:
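
    As a minimal illustration of that limitation, here is a hedged C++ sketch of <chrono>-based timing (my example, not part of the StopWatch itself): steady_clock is monotonic and reports durations down to nanoseconds, but it exposes no CPU cycle counts and no CPU frequency.

        // Timing a region with C++ <chrono>: nanosecond units at best,
        // and no access to CPU cycles or to the CPU frequency.
        #include <chrono>
        #include <cstdio>

        int main() {
            auto t0 = std::chrono::steady_clock::now();
            volatile long sum = 0;
            for (long i = 0; i < 1000000; ++i) sum += i;   // work being measured
            auto t1 = std::chrono::steady_clock::now();
            auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
            std::printf("elapsed: %lld ns\n", (long long)ns.count());
            return 0;
        }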

    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. I am talking about my fluid intelligence in my previous thoughts below, but you have to notice that professionalism is also important. So you are noticing that I have just rapidly implemented a sophisticated StopWatch, and you can also notice my kind of professionalism, since I am rapidly discovering patterns with my fluid intelligence, and I am also professional in the way that I am implementing it and the way I am teaching you. So I think you can be confident in my professionalism, since even though I have done it rapidly, you can clearly notice the quality of my new StopWatch, and of course I will document it correctly so that you know how to use it correctly and so that you know how to implement a good StopWatch. And of course I am supporting x86 and x64 CPUs, and of course I can also support ARM processors, but you have to read my thoughts below that explain my views on ARM processors and on ARM vs. x86 and x64 CPUs, so I invite you to read my previous thoughts below so that you understand my views about how to implement a good StopWatch:

    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, and you have to understand that the RDTSCP assembler instruction helps you get consistent timestamps across cores: it reads the Time Stamp Counter together with the processor ID, so you can detect when a measurement has migrated between cores, and on CPUs with an invariant TSC the timestamps stay consistent across cores, which makes it usable for accurate timing measurements in multicore/threaded environments. But the RDTSCP assembler instruction is available only in newer CPUs, so now I will document more how to use CPU affinity in Windows and Linux so as to solve the following problem with the RDTSC assembler instruction, which the older CPUs also support (see the affinity sketch just after the following point):

    - Multicore/Threaded environments: If your system has multiple cores or threads, using rdtsc may not provide synchronized timing across different cores or threads. This can lead to inconsistent and unreliable timing measurements.
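
    Here is a hedged sketch of that affinity pinning in C++ (an illustration using the documented OS calls, not the StopWatch's own Pascal code): pinning the measuring thread to one core means every RDTSC read comes from the same core's TSC.

        // Pin the current thread to core 0 on Windows or Linux.
        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE            // glibc guard for pthread_setaffinity_np
        #endif
        #ifdef _WIN32
        #include <windows.h>
        void pin_to_core0() {
            // Bit 0 of the mask selects logical processor 0.
            SetThreadAffinityMask(GetCurrentThread(), 1);
        }
        #else
        #include <pthread.h>
        #include <sched.h>
        void pin_to_core0() {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(0, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }
        #endif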

    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just added to my new StopWatch a PreciseSleep() function that is more accurate than the Windows and Linux Sleep() function, so now I think it is the final source code version of my StopWatch. I have tested it with older CPUs and with newer CPUs and I think it is working correctly, and I have also tested it with both Windows and Linux and I think it is working correctly, and now I will start to document it so that you know about it and so that you know how to use it. And now you can download the final source code version of my new updated StopWatch from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal

    and so that you know how to use it, and so as to have a deep understanding of the StopWatch, I invite you to read my previous thoughts below:
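
    For context, here is a hedged sketch of one common way such a precise sleep is built (my illustration of the general technique, not the author's actual implementation): sleep the coarse part of the interval with the OS sleep, whose granularity is limited by the scheduler, then spin on a monotonic clock for the remainder.

        // Hybrid sleep: coarse OS sleep plus a fine busy-wait.
        #include <chrono>
        #include <thread>

        void precise_sleep(std::chrono::nanoseconds target) {
            using clock = std::chrono::steady_clock;
            auto deadline = clock::now() + target;
            // Leave ~2 ms of slack for OS scheduler jitter.
            auto slack = std::chrono::milliseconds(2);
            if (target > slack)
                std::this_thread::sleep_for(target - slack);
            // Spin until the exact deadline.
            while (clock::now() < deadline) { /* busy-wait */ }
        }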

    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I will talk more about the essence of measuring time in the computer. From my understanding with my fluid intelligence of how to implement my new StopWatch, I am discovering patterns with my fluid intelligence that explain more the essence of measuring time in the computer, so here they are: you have to get the frequency of the CPU, I mean that when you are measuring time you also depend on the CPU frequency, but in the new CPUs the frequency can change dynamically, so you have two ways of doing it: you can disable CPU frequency scaling in the BIOS, do your exact time measurement, and set it back again; or, the second way, you can get a decent approximation without disabling CPU frequency scaling and do the benchmark timing of your code, as I explain below. And of course the new CPUs today are multicore, so you have to know how to set the CPU affinity, as I will explain to you, so as to do the timing with the StopWatch. Other than that, you can get good microsecond accuracy and decent nanosecond accuracy with the RDTSC assembler instruction, and you can get CPU tick accuracy with the RDTSCP assembler instruction, but so as to know more about them, read my thoughts below. Other than that, I am also explaining the implementation of a StopWatch much more deeply in my thoughts below, so I invite you to read them so as to understand my views on how to implement a good StopWatch:
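
    On the point about the CPU frequency, here is a hedged C++ sketch of one way to calibrate the tick rate at runtime (my illustration; the function name is mine): sample the TSC against a monotonic clock over a fixed interval. On CPUs with an invariant TSC, the counter ticks at a constant rate regardless of frequency scaling, so this ratio stays stable.

        // Estimate TSC ticks per second against steady_clock.
        #include <chrono>
        #include <thread>
        #include <x86intrin.h>   // __rdtsc (MSVC: <intrin.h>)

        double tsc_ticks_per_second() {
            using clock = std::chrono::steady_clock;
            unsigned long long t0 = __rdtsc();
            auto c0 = clock::now();
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            unsigned long long t1 = __rdtsc();
            auto c1 = clock::now();
            double seconds = std::chrono::duration<double>(c1 - c0).count();
            return (double)(t1 - t0) / seconds;
        }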

    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just updated my StopWatch to support both the RDTSCP and RDTSC assembler instructions: when the CPU is not new and it doesn't support RDTSCP, it will use RDTSC, and when it is a new CPU that supports RDTSCP, it will use it. RDTSC is not a serializing instruction, so I have just correctly used the necessary memory barriers; RDTSCP, for its part, waits until all previous instructions have executed before reading the counter (it is not a fully serializing instruction, so a fence after it is still recommended).
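
    Here is a hedged C++ sketch of that fencing pattern (my illustration with compiler intrinsics; the StopWatch itself is written in Pascal assembler): LFENCE around RDTSC keeps the timed region from being reordered past the counter reads, and an LFENCE after RDTSCP keeps later instructions from starting early.

        // Fenced TSC reads on x86/x64.
        #include <x86intrin.h>   // __rdtsc, __rdtscp, _mm_lfence

        static inline unsigned long long read_tsc_fenced() {
            _mm_lfence();                     // wait for earlier instructions
            unsigned long long t = __rdtsc();
            _mm_lfence();                     // keep later ops from moving up
            return t;
        }

        static inline unsigned long long read_tscp_fenced() {
            unsigned int aux;                 // receives IA32_TSC_AUX (core id)
            unsigned long long t = __rdtscp(&aux);
            _mm_lfence();
            return t;
        }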

    So I will now document my StopWatch correctly so that you also know how to use CPU affinity correctly, and now you can download the final version of my source code from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal

    And I invite you to read all my previous following thoughts so as to deeply understand the StopWatch:

    So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so now I have to explain something important. For a deep understanding of the StopWatch, you have to know that the assembler instruction RDTSC is supported by the great majority of x86 and x64 CPUs, but it is not a serializing instruction; I mean that it can be subject to out-of-order execution, which may affect its accuracy, so that is why I have just correctly added some other memory barriers, and now I think that it is working correctly. And you have to understand that there is another assembler instruction, RDTSCP, which waits for all previous instructions to execute before reading the counter and so avoids that out-of-order problem, but it is compatible with just the newer x86 and x64 CPUs, so I will support it in the very near future. But now I think you can be confident in my new updated StopWatch, and I think it is an interesting StopWatch that shows how to implement a good StopWatch from the low-level layers. So I think you have to be smart so as to implement it correctly with RDTSC, as I have just done, and you can download the source code of my new StopWatch that I have just updated from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal

    And I invite you to read my previous thoughts below so as to have a deep understanding of the StopWatch:
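
    Since the text above distinguishes newer CPUs with RDTSCP from older ones without it, here is a hedged sketch of how that runtime detection is commonly done (my illustration; RDTSCP support is reported by CPUID leaf 0x80000001, EDX bit 27):

        // Detect RDTSCP support at runtime (GCC/Clang; MSVC has __cpuid).
        #include <cpuid.h>

        bool cpu_has_rdtscp() {
            unsigned int eax, ebx, ecx, edx;
            if (!__get_cpuid(0x80000001u, &eax, &ebx, &ecx, &edx))
                return false;               // extended leaf not available
            return (edx >> 27) & 1u;        // bit 27 = RDTSCP
        }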

    So I think that my new StopWatch can give a decent approximation even if you don't disable CPU frequency scaling in the BIOS, and here is why:

    When benchmarking a CPU under a heavy workload, it is generally expected that frequency scaling changes will be relatively small or negligible. This is because the frequency scaling mechanism typically aims to maximize performance during such scenarios.

    Under heavy load, the CPU frequency scaling algorithm often increases the CPU frequency to provide higher processing power and performance. The goal is to fully utilize the CPU's capabilities for the benchmarking workload.

    In these cases, frequency scaling changes are generally designed to be minimal to avoid introducing significant variations in performance. The CPU frequency may remain relatively stable or vary within a relatively small range during the benchmarking
    process.

    Considering these factors, when benchmarking under heavy workload conditions, the impact of frequency scaling changes on timing measurements using RDTSC is typically limited. As a result, RDTSC can provide a reasonable approximation of timing for
    benchmarking purposes.

    So then I invite you to read my following previous thoughts so that you understand my views on the StopWatch:


    I have just updated my new StopWatch, and it now also includes the correct memory barriers for previous 32-bit Delphi versions like Delphi 7, and you can download it from the web link just below, and I invite you to read my previous thoughts below so as to understand my views about the StopWatch:

    So I have just updated my new StopWatch, and the first problem is:

    - Instruction reordering: The rdtsc instruction itself is not a serializing instruction, which means that it does not necessarily prevent instruction reordering. In certain cases, the CPU may reorder instructions, leading to inaccuracies in timing
    measurements.

    So I have just used memory barriers so as to solve the above problem.

    And here is the second problem:

    - CPU frequency scaling: Modern CPUs often have dynamic frequency scaling, where the CPU frequency can change based on factors such as power management and workload. This can result in variations in the time measurement based on the CPU's operating
    frequency.

    So you have to disable CPU frequency scaling in the BIOS so as to solve the above problem, and after that do your timing with my StopWatch.

    And for the following third problem:

    - Multicore/Threaded environments: If your system has multiple cores or threads, using rdtsc may not provide synchronized timing across different cores or threads. This can lead to inconsistent and unreliable timing measurements.

    You can set the CPU affinity so as to solve this third problem.

    So I will document my StopWatch more so as to teach you how to use it,
    so stay tuned!

    And now I have just updated my new StopWatch with the necessary memory barriers, and now you can be confident in my new updated StopWatch.

    So now my new updated StopWatch uses memory barriers correctly, it avoids the overflow problem of the Time Stamp Counter (TSC), it supports microsecond and nanosecond and CPU clock timing, and it is object oriented. I have just made it support both 32-bit x86 and 64-bit x64 CPUs, and it supports both the Delphi and FreePascal compilers and works in both Windows and Linux. What is good about my new StopWatch is that it shows how to implement it from the low-level layers in assembler etc., so I invite you to look at the new updated version of my source code, which you can download from my website here:

    https://sites.google.com/site/scalable68/a-portable-timer-for-delphi-and-freepascal
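
    To summarize those pieces in one place, here is a hedged, minimal object-oriented sketch in C++ (my illustration; the actual StopWatch is written in Delphi/FreePascal): the 64-bit tick delta sidesteps 32-bit wraparound, and a calibrated tick rate, such as the one returned by the tsc_ticks_per_second() sketch above, converts ticks to microseconds or nanoseconds.

        // Minimal stopwatch over fenced TSC reads.
        #include <x86intrin.h>   // __rdtsc, _mm_lfence

        class StopWatch {
            unsigned long long start_ = 0, stop_ = 0;
            double ticks_per_sec_;   // from a runtime calibration
        public:
            explicit StopWatch(double tps) : ticks_per_sec_(tps) {}
            void start() { _mm_lfence(); start_ = __rdtsc(); _mm_lfence(); }
            void stop()  { _mm_lfence(); stop_  = __rdtsc(); _mm_lfence(); }
            unsigned long long ticks() const { return stop_ - start_; }
            double microseconds() const { return ticks() * 1e6 / ticks_per_sec_; }
            double nanoseconds()  const { return ticks() * 1e9 / ticks_per_sec_; }
        };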


    Other than that, read my previous thoughts below so as to understand my views:

    So now we have to attain a "deep" understanding of the StopWatch. So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. As you are noticing, I am, with my fluid intelligence, understanding the StopWatch deeply, and I have just discovered that the following StopWatch: https://www.davdata.nl/math/timer.html , from the following engineer from Amsterdam: https://www.davdata.nl/math/about.html , is not working correctly, since he is calling the function GetTickCount() in the constructor, and there is a problem and a bug there: when the tick count value in milliseconds returned by GetTickCount() reaches its maximum value, which is high(dword), it wraps around to zero and starts counting up again (this happens after about 49.7 days of uptime, since the tick count is stored in a fixed-size 32-bit data type). So his way of timing in milliseconds in the constructor is not working, since it is not safe, even though this StopWatch of this engineer from Amsterdam does effectively avoid the overflow problem of the Time Stamp Counter (TSC), since he is using an int64 in the 32-bit x86 architecture in the Intel assembler function getCPUticks(), as I understand it, and this int64 can, from my calculations, go up to 29318.9829 years. So I think his StopWatch is not working for the reason I give just above, and the second problem is that the accuracy of the timing obtained from the code he provided using the rdtsc instruction in assembler depends on various factors, including the hardware and software environment. However, it's important to note that directly using rdtsc for timing purposes may not provide the desired accuracy for several reasons (a sketch of the wraparound-safe delta follows the list below):

    - CPU frequency scaling: Modern CPUs often have dynamic frequency scaling, where the CPU frequency can change based on factors such as power management and workload. This can result in variations in the time measurement based on the CPU's operating
    frequency.

    - Instruction reordering: The rdtsc instruction itself is not a serializing instruction, which means that it does not necessarily prevent instruction reordering. In certain cases, the CPU may reorder instructions, leading to inaccuracies in timing
    measurements.

    - Multicore/Threaded environments: If your system has multiple cores or threads, using rdtsc may not provide synchronized timing across different cores or threads. This can lead to inconsistent and unreliable timing measurements.
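
    On the first point about GetTickCount(), here is a hedged sketch of the wraparound-safe way to use a 32-bit tick count (my illustration): storing an absolute tick value, as the criticized constructor does, breaks when the counter wraps, whereas computing the delta with unsigned 32-bit arithmetic stays correct across one wrap because the subtraction is performed modulo 2^32.

        // Wraparound-safe elapsed time from a 32-bit millisecond counter.
        #include <cstdint>

        uint32_t elapsed_ms(uint32_t start_ticks, uint32_t now_ticks) {
            // Well-defined unsigned arithmetic: correct even if now_ticks
            // wrapped past zero after start_ticks was sampled.
            return now_ticks - start_ticks;
        }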

    So I have just thought more, and I think I will not support ARM in my new StopWatch, since ARM processors don't have a single counter like the Time Stamp Counter (TSC) in x86 processors that is compatible across the previous 32-bit and the 64-bit CPUs, so ARM has many important weaknesses, and the first important weakness is the following:

    There is no single generic method that can be universally applied to all Arm processors for measuring time in CPU clocks. The available timing mechanisms and registers can vary significantly across different Arm processor architectures, models, and
    specific implementations.

    In general, Arm processors provide various timer peripherals or system registers that can be used for timing purposes. However, the specific names, addresses, and functionalities of these timers can differ between different processors.

    To accurately measure time in CPU clocks on a specific Arm processor, you would need to consult the processor's documentation or technical reference manual. These resources provide detailed information about the available timers, their registers, and how
    to access and utilize them for timing purposes.

    It's worth noting that some Arm processors may provide performance monitoring counters (PMCs) that can be used for fine-grained timing measurements. However, the availability and usage of PMCs can also vary depending on the specific processor model.

    Therefore, to achieve accurate and reliable timing measurements in CPU clocks on a particular Arm processor, it's crucial to refer to the documentation and resources provided by the processor manufacturer for the specific processor model you are
    targeting.
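
    For what it's worth, here is a hedged sketch of one such per-architecture mechanism on 64-bit ARM only (my illustration, under the assumption that the OS exposes these registers to user space, as most do): the ARMv8 generic timer provides a virtual counter, CNTVCT_EL0, and its fixed frequency, CNTFRQ_EL0; note that it counts at a fixed rate rather than in core clock cycles.

        // Read the ARMv8 generic timer from user space (AArch64, GCC/Clang).
        #if defined(__aarch64__)
        static inline unsigned long long read_cntvct() {
            unsigned long long v;
            asm volatile("mrs %0, cntvct_el0" : "=r"(v));   // virtual count
            return v;
        }

        static inline unsigned long long read_cntfrq() {
            unsigned long long f;
            asm volatile("mrs %0, cntfrq_el0" : "=r"(f));   // ticks per second
            return f;
        }
        #endif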

    And the other weaknesses of ARM processors are the following:

    I have just looked at the following articles about Rosetta 2 and the benchmarks of Apple Silicon M1 Emulating x86:

    https://www.computerworld.com/article/3597949/everything-you-need-to-know-about-rosetta-2-on-apple-silicon-macs.html

    and read also here:

    https://www.macrumors.com/2020/11/15/m1-chip-emulating-x86-benchmark/

    But I think that the problem with the Apple Silicon M1 and the next Apple Silicon M2 is that Rosetta 2 only lets you run x86-64 macOS apps, that is, apps that were built for macOS (not Windows) and aren't 32-bit. The macOS restriction eliminates huge numbers of Windows apps, and the 64-bit restriction eliminates even more.

    Also read the following:

    Apple says new M2 chip won’t beat Intel’s finest

    Read more here:

    https://www.pcworld.com/article/782139/apple-m2-chip-wont-beat-intels-finest.html


    And here is what I am saying in my following thoughts about technology, about ARM vs. x86:

    More of my philosophy about the Apple Silicon and about Arm Vs. X86 and more of my thoughts..

    I invite you to read carefully the following interesting article so
    that to understand more:

    Overhyped Apple Silicon: Arm Vs. X86 Is Irrelevant

    https://seekingalpha.com/article/4447703-overhyped-apple-silicon-arm-vs-x86-is-irrelevant


    More of my philosophy about code compression of RISC-V and ARM and more of my thoughts..

    I think i am highly smart, and i have just read the following paper
    that says that RISC-V Compressed programs are 25% smaller than RISC-V programs, fetch 25% fewer instruction bits than RISC-V programs, and incur fewer instruction cache misses. Its code size is competitive with other compressed RISCs. RVC is expected to
    improve the performance and energy per operation of RISC-V.

    Read more here to notice it:

    https://people.eecs.berkeley.edu/~krste/papers/waterman-ms.pdf


    So I think RVC has the same compression as ARM Thumb-2, so I think that I was correct in my previous thoughts, read them below, so I think we now have to look at whether x86 or x64 is still more cache friendly even with Thumb-2 compression or RVC.

    More of my philosophy about who will be the winner, x86 or x64 or ARM, and more of my thoughts..

    I think I am highly smart, and I think that since x86 or x64 has complex instructions and ARM has simple instructions, x86 or x64 is more cache friendly, but ARM has wanted to solve the problem by compressing the code by using Thumb-2, and I think Thumb-2 compresses the size of the code by around 25%, so I think we have to look at whether x86 or x64 is still more cache friendly even with Thumb-2 compression. And I think that x86 or x64 will still optimize the power or energy efficiency more, so I think that, since x86 or x64 has other big advantages, like the advantage that I am talking about below, x86 or x64 will still be a successful big player in the future, so I think it will be the "tendency". So I think that x86 and x64 will be good for a long time to make money in business, and they will be good for business for the USA, which makes the AMD and Intel CPUs.


    More of my philosophy about x86 or x64 and ARM architectures and more of my thoughts..

    I think I am highly smart, and I think that the x86 or x64 architectures have another big advantage over the ARM architecture, and it is the following:


    "The Bright Parts of x86

    Backward Compatibility

    Compatibility is a two-edged sword. One reason that ARM does better in low-power contexts is that its simpler decoder doesn't have to be compatible with large accumulations of legacy cruft. The downside is that ARM operating systems need to be modified
    for every new chip version.

    In contrast, the latest 64-bit chips from AMD and Intel are still able to boot PC DOS, the 16-bit operating system that came with the original IBM PC. Other hardware in the system might not be supported, but the CPUs have retained backward compatibility
    with every version since 1978.

    Many of the bad things about x86 are due to this backward compatibility, but it's worth remembering the benefit that we've had as a result: New PCs have always been able to run old software."

    Read more here at the following web link so as to notice it:

    https://www.informit.com/articles/article.aspx?p=1676714&seqNum=6


    So I think that you cannot compare x86 or x64 to ARM, since it is not just a power-efficiency comparison, as some are doing by comparing the Apple M1 Pro ARM CPU to x86 or x64 CPUs. That is why I think that the x86 or x64 architectures will be here for a long time, so I think that they will be good for a long time to make money in business, and they are a good business for the USA, which makes the AMD and Intel CPUs.

    More of my philosophy about the weak memory model and ARM and more of my thoughts..


    I think the ARM hardware memory model is not good, since it is a weak memory model, so ARM has to provide us with a TSO memory model that is compatible with the x86 TSO memory model, and read what Kent Dickey says about it in my following writing:


    ProValid, LLC was formed in 2003 to provide hardware design and verification consulting services.

    Kent Dickey, founder and President, has had 20 years experience in hardware design and verification. Kent worked at Hewlett-Packard and Intel Corporation, leading teams in ASIC chip design and pre-silicon and post-silicon hardware verification. He
    architected bus interface chips for high-end servers at both companies. Kent has received more than 10 patents for innovative work in both design and verification.

    Read more here about him:

    https://www.provalid.com/about/about.html


    And read the following thoughts of Kent Dickey about weak memory models such as ARM's:

    "First, the academic literature on ordering models is terrible. My eyes
    glaze over and it's just so boring.

    I'm going to guess "niev" means naive. I find that surprising since x86
    is basically TSO. TSO is a good idea. I think weakly ordered CPUs are a
    bad idea.

    TSO is just a handy name for the Sparc and x86 effective ordering for
    writeback cacheable memory: loads are ordered, and stores are buffered and will complete in order but drain separately from the main CPU pipeline. TSO can allow loads to hit stores in the buffer and see the new value, this doesn't really matter for
    general ordering purposes.

    TSO lets you write basic producer/consumer code with no barriers. In fact, about the only type of code that doesn't just work with no barriers on TSO is Lamport's Bakery Algorithm since it relies on "if I write a location and read it back and it's still
    there, other CPUs must see that value as well", which isn't true for TSO.

    Lock free programming "just works" with TSO or stronger ordering guarantees, and it's extremely difficult to automate putting in barriers for complex algorithms for weakly ordered systems. So code for weakly ordered systems tend to either toss in lots of
    barriers, or use explicit locks (with barriers). And extremely weakly ordered systems are very hard to reason about, and especially hard to program since many implementations are not as weakly ordered as the specification says they could be, so just
    running your code and having it work is insufficient. Alpha was terrible in this regard, and I'm glad it's silliness died with it.

    HP PA-RISC was documented as weakly ordered, but all implementations
    guaranteed full system sequential consistency (and it was tested in and enforced, but not including things like cache flushing, which did need barriers). No one wanted to risk breaking software from the original in-order fully sequential machines that might have relied on it. It wasn't really a performance issue, especially once OoO was added.

    Weakly ordered CPUs are a bad idea in much the same way in-order VLIW is a bad idea. Certain niche applications might work out fine, but not for a general purpose CPU. It's better to throw some hardware at making TSO perform well, and keep the software
    simple and easy to get right.

    Kent"


    Read the rest on the following web link:

    https://groups.google.com/g/comp.arch/c/fSIpGiBhUj0
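
    To make Kent Dickey's producer/consumer point concrete, here is a hedged C++ sketch (my illustration, not from his post): under TSO, the handoff below would be correct even with plain stores, because stores drain in order; on a weakly ordered CPU such as ARM, the release/acquire pair is what inserts the needed barriers.

        // Producer/consumer handoff: free on TSO, needs barriers on ARM.
        #include <atomic>

        int payload = 0;
        std::atomic<bool> ready{false};

        void producer() {
            payload = 42;                                   // plain store
            ready.store(true, std::memory_order_release);   // orders the store
        }

        void consumer() {
            while (!ready.load(std::memory_order_acquire)) { /* spin */ }
            // With acquire/release this always reads 42; with relaxed
            // ordering it would still work on TSO but could read 0 on a
            // weakly ordered machine.
            int v = payload;
            (void)v;
        }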




    Tandem cells using perovskites and silicon make solar power more efficient and affordable

    "Research into 'miracle material' perovskite in the past decade is now bearing fruit with more labs crossing the 30 percent barrier for solar cells. Solar is already a cost-effective method for harnessing renewable energy and is deployed across large
    parts of the planet in a bid to move away from fossil fuels."

    Read more here:

    https://interestingengineering.com/innovation/tandem-solar-cells-30-percent-energy-conversion-perovskites-silicon


    And Toyota Motor Corporation is a Japanese multinational automotive manufacturer headquartered in Toyota City, Aichi, Japan. It was founded by Kiichiro Toyoda and incorporated on August 28, 1937. Toyota is one of the largest automobile manufacturers in the world, producing about 10 million vehicles per year. So Toyota announces a battery with a range of 1,200 km and a recharge in 10 minutes, and Toyota seems to have definitively solved both the problem of stability and that of production cost, and you can read about it in the following article (you can translate the article from French to English):

    Toyota announces a battery with a range of 1,200 km and a recharge in 10 minutes!

    Read more here:

    https://www.futura-sciences.com/tech/actualites/voiture-electrique-toyota-annonce-batterie-autonomie-1-200-km-recharge-10-min-106302/


    I invite you to read the following web page from IBM that says that AES 256 encryption is safe from large quantum computers:

    https://cloud.ibm.com/docs/key-protect?topic=key-protect-quantum-safe-cryptography-tls-introduction


    And read the following so as to understand it correctly:

    And IBM set to revolutionize data security with latest quantum-safe technology

    Read more here in the following new article:

    https://interestingengineering.com/innovation/ibm-revolutionizes-data-security-with-quantum-safe-technology


    And I have also just read the following article that says the following:

    "AES-128 and RSA-2048 both provide adequate security against classical attacks, but not against quantum attacks. Doubling the AES key length to 256 results in an acceptable 128 bits of security, while increasing the RSA key by more than a factor of 7.5
    has little effect against quantum attacks."

    Read more here:

    https://techbeacon.com/security/waiting-quantum-computing-why-encryption-has-nothing-worry-about


    So I think that AES-256 is an acceptable encryption against quantum computers.
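
    As a one-line sanity check of the quoted numbers (my own note, using the standard Grover bound): Grover's algorithm searches an unstructured space of $N = 2^k$ keys in on the order of $\sqrt{N}$ quantum queries, so

        \sqrt{2^{128}} = 2^{64} \quad \text{(128-bit key: weakened)}, \qquad
        \sqrt{2^{256}} = 2^{128} \quad \text{(256-bit key: still infeasible)},

    which matches the "acceptable 128 bits of security" for AES-256 quoted above.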


    And symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough, and to give you more proof of it, look at the following article from ComputerWorld, where Lamont Wood says:

    "But using quantum technology with the same throughput, exhausting the possibilities of a 128-bit AES key would take about six months. If a quantum system had to crack a 256-bit key, it would take about as much time as a conventional computer needs to
    crack a 128-bit key.
    A quantum computer could crack a cipher that uses the RSA or EC algorithms almost immediately."

    Read more here on ComputerWorld:

    https://www.computerworld.com/article/2550008/the-clock-is-ticking-for-encryption.html


    And about Symmetric encryption and quantum computers..

    Symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough.

    Read more here:

    Is AES-256 Quantum Resistant?

    https://medium.com/@wagslane/is-aes-256-quantum-resistant-d3f776163672


    And that is why I have implemented parallel AES encryption with 256-bit keys in my following interesting software project called Parallel Archiver; you can read about it and download it from here:

    https://sites.google.com/site/scalable68/parallel-archiver


    TSMC: Chinese curbs on rare metal exports will not have immediate effect

    Read more here:

    https://www.tomshardware.com/news/tsmc-export-curbs-on-rare-metal-exports-will-not-have-immediate-effect


    And I invite you to read carefully about the new LongNet that scales the sequence length of Transformers to 1,000,000,000 tokens (and notice in my explanation below that the sequence length is not the context window):

    https://huggingface.co/papers/2307.02486


    So I say that you have to understand that the sequence length primarily refers to the input length during inference, i.e. when using the model for prediction. It determines the maximum length of the prompt or input text that the model can process at once.

    During training, the context window or context size is used, which determines the length of the text that the model takes into account for predicting the next token in a sequence. The context window is typically smaller than the maximum sequence length.

    So to clarify:

    - Sequence length: Refers to the maximum length of the prompt or input text during inference or prediction.


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)