
    From Amine Moulay Ramdane@21:1/5 to All on Wed May 17 10:22:44 2023
    Hello,



More of my philosophy about the token limit and about GPT-4 and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..

I am a white Arab from Morocco, and I think I am smart since I have invented many scalable algorithms and other algorithms..


    GPT-4 has a maximum token limit of 32,000 (equivalent to 25,000 words),
    which is a significant increase from GPT-3.5’s 4,000 tokens (equivalent to 3,125 words), and having more tokens in a large language model like GPT-4 provides several benefits, and here they are:

    - Increased Context: More tokens allow the model to consider a larger context when generating responses. This can lead to a better understanding of complex queries and enable more accurate and relevant responses.

    - Longer Conversations: With more tokens, the model can handle longer conversations without truncating or omitting important information. This is particularly useful when dealing with multi-turn conversations or discussions that require a deep
    understanding of the context.

    - Enhanced Coherence: Additional tokens enable the model to maintain a coherent and consistent narrative throughout a conversation. It helps avoid abrupt changes in topic or tone and allows for smoother interactions with users.

    - Improved Accuracy: Having more tokens allows the model to capture finer details and nuances in language. It can lead to more accurate and precise responses, resulting in a higher quality conversational experience.

    - Expanded Knowledge Base: By accommodating more tokens, the model can incorporate a larger knowledge base during training, which can enhance its understanding of various topics and domains. This can result in more informed and insightful responses to a
    wide range of queries.

    - Reduced Information Loss: When a model is constrained by a token limit, it may need to truncate or remove parts of the input text, leading to potential loss of information. Having more tokens minimizes the need for such truncation, helping to preserve
    the integrity of the input and generate more accurate responses.

    - Support for Richer Formatting: Increased token capacity allows for more extensive use of formatting, such as HTML tags or other markup language, to provide visually appealing and structured responses.


    It's important to note that while having more tokens can bring these benefits, it also comes with computational limitations and increased inference time. Finding a balance between token count and computational resources is crucial for practical
    deployment of language models.
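To make the token-limit trade-off above concrete, here is a minimal sketch of keeping a chat history inside a model's context budget. The ~4/3-tokens-per-word rule of thumb and the function names are my own illustration, not an official OpenAI API; accurate counts require the model's actual tokenizer.

```python
# Rough sketch of keeping a chat history inside a model's token limit.
# The 1 token ~ 0.75 words heuristic and these names are illustrative only.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4/3 tokens per word."""
    return (len(text.split()) * 4 + 2) // 3

def trim_history(messages, max_tokens=32000):
    """Drop the oldest messages until the estimated total fits the budget."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # the oldest context is truncated first and is lost
    return kept

history = ["hello " * 10000, "short question"]
trimmed = trim_history(history, max_tokens=4000)  # only the recent message fits
```

This is exactly the information loss described above: with a 4,000-token budget the long early message is dropped entirely, while a 32,000-token budget would have kept both.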


More of my philosophy about GPT-4 and about the value and about large language models (LLMs) such as GPT-4 and about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..



So I think that GPT-4 has the following limitations:


1- GPT-4 lacks a real understanding of context: it was trained on large amounts of text data, but it does not have the ability to truly understand the context of the text. This means that it can generate coherent sentences, but they may not always make sense in the context of the conversation.

2- And GPT-4 is limited in its ability to generate creative or original content. GPT-4 is trained on existing text data, so it is not able to generate genuinely new ideas or concepts. This means that GPT-4 is not suitable for tasks that require creativity or originality.


So this is why I think that you cannot learn computer programming or software development from GPT-4 alone; you have to learn from people who know how to do computer programming or software development. So I think that even with GPT-4, or the soon-coming GPT-5, you can still extract good value despite these limitations and do business with it. So I think we still have to be optimistic about it.


And I think ChatGPT has another problem: the generated content can infringe on the copyright of existing works. This could occur if ChatGPT generates content similar to existing copyrighted material in the data on which it has been trained. So you have to be careful, since it can hurt your business, but you have to know that copyright does not protect ideas, concepts, systems, or methods of doing something. Copyright law protects the expression of ideas rather than the ideas themselves; in other words, it protects the specific form in which an idea is expressed, rather than the underlying idea or concept. And you have to also know that there is another problem with ChatGPT: it can generate an invention, and it could be argued that the creators of the model, OpenAI, should be able to patent the invention. However, it could also be argued that the source material used to train the model should be considered prior art, meaning that the invention would not be considered new and therefore not patentable.


And I have just looked at the following new video from TechLead, and I invite you to watch it:

    Why ChatGPT AI Will Destroy Programmers.

    https://www.youtube.com/watch?v=U1flF5WOeNc



I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I think that TechLead in the above video is not thinking correctly when he says that software programming is dying. I say that software programming is not dying, since the future apps are, for example, the Metaverse, and of course we need Zettascale, or ZettaFLOP, computing so that the Metaverse becomes possible. And as you can notice, the article below, in which Intel's Raja Koduri is talking, says that the architecture is possible and that it will be ready around 2027 to 2030, and it is the following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x, on top of the two ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what
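The multipliers in Koduri's quote can be checked with simple arithmetic: 16 x 2 x 3 x 5 = 480, which he rounds to "about 500x", and roughly 500x on top of a 2-ExaFLOP system like Aurora gives on the order of 1,000 ExaFLOPs, i.e. a ZettaFLOP. A quick sketch of that arithmetic:

```python
# Checking the arithmetic in Raja Koduri's quote above.
architecture, power_thermals, data_movement, process = 16, 2, 3, 5
combined = architecture * power_thermals * data_movement * process
print(combined)  # 480, which Koduri rounds to "about 500x"

aurora_exaflops = 2
# 1 ZettaFLOP = 1000 ExaFLOPs, so ~500x on top of 2 ExaFLOPs reaches it.
print(combined * aurora_exaflops)  # 960 ExaFLOPs, close to 1 ZettaFLOP
```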

And the other future apps are, for example, the ones that use data to be smart in real time (read about it here: https://www.cxotoday.com/cxo-bytes/data-driven-smart-apps-are-the-future/ ). And I also say that software programming is not dying, since GPT-4 and similar artificial intelligence will replace just a small percentage of software programmers, because software programming also needs to care about accuracy and reliability. So you have to look at the following most important limitations of GPT-4 and similar artificial intelligence to notice it:


1- GPT-4 lacks a real understanding of context: it was trained on large amounts of text data, but it does not have the ability to truly understand the context of the text. This means that it can generate coherent sentences, but they may not always make sense in the context of the conversation.


2- And GPT-4 is limited in its ability to generate creative or original content. GPT-4 is trained on existing text data, so it is not able to generate genuinely new ideas or concepts. This means that GPT-4 is not suitable for tasks that require creativity or originality.


And I invite you to read the following article to understand more about GPT-4:

    Exploring the Limitations and Potential of OpenAI’s GPT-4

    https://ts2.space/en/exploring-the-limitations-and-potential-of-openais-gpt-4/



And more of my philosophy about the objective function and about artificial intelligence and more of my thoughts..

I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, and I think I am understanding GPT-4 more with my fluid intelligence. So I think that GPT-4 uses deep learning, and it uses the mechanism of self-attention to understand the context, and it uses reinforcement learning from human feedback (RLHF), which uses a reward mechanism to learn from the feedback of the people who are using GPT-4, so as to judge whether this or that data is true or not.

But I think that the problem of GPT-4 is that it needs a lot of data, and that is its first weakness; and it is dependent on the data and on the quality of the data, and that is its second weakness. In the unsupervised learning that is used to train GPT-4 on the massive data, the quality of the data is not known with certainty, so it is a weakness of artificial intelligence such as GPT-4.

And about the objective function that guides: I think that it is the patterns that are found and learned by the neural network of GPT-4 that play the role of the guiding objective function, so the objective function comes from the massive data on which GPT-4 has been trained, and I think this is also a weakness of GPT-4, since I think that what is missing is what my new model of what consciousness is explains: the meaning from human consciousness also plays the role of the objective function, which makes it much better than artificial intelligence and makes it need much less data. That is why the human brain needs much less data than artificial intelligence such as GPT-4. So I invite you to read my following previous thoughts so that you understand my views:
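The self-attention mechanism mentioned above can be sketched in a few lines. This is a minimal, illustrative toy with 2-dimensional vectors of my own construction; real models like GPT-4 use learned query/key/value projections and many attention heads, but the core idea of mixing the context weighted by similarity is the same:

```python
# Minimal sketch of scaled dot-product self-attention, the mechanism
# Transformers use to weigh context. Toy 2-D vectors for illustration only.
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """For each query, mix the values weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)    # attention weights sum to 1
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(tokens, tokens, tokens)  # each output row blends the context
```

Each output vector is a weighted blend of the whole input sequence, which is how the model "sees" context when generating a token.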



    More of my philosophy about artificial intelligence such as GPT-4 and about my philosophy and more of my thoughts..


I think I am highly smart since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just looked more carefully at GPT-4, and I think, as I have just explained, that it will become powerful, but it is limited by the data, and by the quality of the data, on which it has been trained. So if it encounters a new situation to be solved, and the solution cannot be inferred from the data on which it has been trained, it will not be capable of solving this new situation. So I think that my new model of what consciousness is explains that what is lacking is the meaning from human consciousness that permits one to solve the problem, so my new model explains that artificial intelligence such as GPT-4 will not attain artificial general intelligence (AGI), but even so, I think that artificial intelligence such as GPT-4 will become powerful.

So I think that the problematic in artificial intelligence is about the low-level layers. Look at the assembler programming language: it is a lower-level layer than high-level programming languages, but you have to notice that the low-level layer of assembler can do things that the higher-level layers cannot do; for example, you can play with the stack registers and low-level hardware registers and low-level hardware instructions, and notice how a low-level layer like assembler programming can teach you more about the hardware, since it is really near the hardware. I think that is what is happening in artificial intelligence such as the new GPT-4: GPT-4 is trained on data so as to discover patterns that make it more smart, but the problematic is that this layer of how it is trained on the data to discover patterns is a high-level layer, like a high-level programming language. So I think that it is missing the low-level layers of what makes the meaning, like the meaning of the past, the present, and the future, or the meaning of space, matter, and time, from which you can construct the bigger meaning of other bigger things. So it is why I think that artificial intelligence will not attain artificial general intelligence (AGI), and I think that what is lacking in artificial intelligence is what my new model of what consciousness is explains. You can read all my following thoughts in the following web link so that you understand my views about it and about different other subjects:


    https://groups.google.com/g/alt.culture.morocco/c/QSUWwiwN5yo




    And read my following previous thoughts:



    More of my philosophy about large language model (LLM) such as GPT-4 and about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


I think large language models (LLMs) such as GPT-4, or the soon-coming GPT-5, will be enhanced much more, since the data on which they are trained is growing exponentially in size. And to understand more about the statistics that show the exponential growth of data, I invite you to read the following article:


    "As we move from an oil-driven era to a data-driven age that is shaped by the rapid digital transformation of global industries (also known as the “Fourth Industrial Revolution”), data is increasingly becoming voluminous, varied and valuable. The
    global datasphere has expanded and continues to grow at a breakneck speed. In a November 2018 white paper “Data Age 2025”, research firm IDC predicted that the global datasphere could increase from 33 zettabytes in 2018 to 175 zettabytes by 2025 (
    Chart 1)."


    Read more here:

    https://insights.nikkoam.com/articles/2019/12/whats_causing_the_exponential
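IDC's figures quoted above, 33 zettabytes in 2018 growing to 175 zettabytes by 2025, imply a compound annual growth rate of roughly 27 percent per year, which can be checked with simple arithmetic:

```python
# Implied compound annual growth rate (CAGR) of the global datasphere,
# from the IDC "Data Age 2025" figures quoted above.
start_zb, end_zb, years = 33.0, 175.0, 2025 - 2018
cagr = (end_zb / start_zb) ** (1.0 / years) - 1.0
print(round(cagr * 100, 1))  # roughly 26.9 percent per year
```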



Also, large language models (LLMs) such as GPT-4 are improving in causal reasoning, and here is the proof of it in the following paper:


    "A large language model (LLM) such as GPT-4 can fail on some queries while succeeding to provide causal reasoning in others. What is remarkable is how few times that such errors happen: our evaluation finds that on average, large language models (LLMs)
    such as GPT-4 can outperform state-of-the-art causal algorithms in graph discovery and counterfactual inference, and can systematize nebulous concepts like necessity and sufficiency of cause by operating solely on natural language input."


And read more here in the following paper to understand it:


    Causal reasoning and Large Language Models: Opening a new frontier for causality

    https://arxiv.org/abs/2305.00050?fbclid=IwAR3bvgnYMiB8F8lirnhNmG4RDRmWqDrWetEylgDUNO9f0DP2dWeKdTkQay4



    More of my philosophy about real-time systems and about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


I invite you to look at Kithara RealTime Suite from Germany, which is a modular real-time extension for Windows and which looks like RTX64 from the USA (you can read about RTX64 here: https://www.intervalzero.com/en-products/en-rtx64/ ). Kithara RealTime Suite also supports Delphi programming, and it also supports C++ and C#. Read about Kithara RealTime Suite, the modular real-time extension for Windows, here:


    https://kithara.com/en/products/real-time-suite



And I am also currently working with Kithara RealTime Suite, the modular real-time extension for Windows, and I am currently implementing some interesting real-time libraries and programs in Delphi for it. And read my following thoughts so that you understand more my views about it:


    Delphi Integration: Kithara RealTime Suite provides seamless integration with Delphi, allowing developers to leverage the Delphi IDE and its extensive component library for real-time development.

And yes, Delphi can be used for real-time and real-time-critical system programming, and to enhance the safety and reliability of your Delphi code, here are some suggestions:


    - Adhere to Best Practices: Follow software engineering best practices such as modular design, code reuse, and encapsulation. This can help improve code readability, maintainability, and reduce the potential for errors.

    - Apply Defensive Programming Techniques: Implement defensive programming techniques such as input validation, error handling, and boundary checks. This can help prevent unexpected behaviors, improve robustness, and enhance the safety of your code.

    - Use Code Reviews and Testing: Conduct thorough code reviews to identify and address potential issues. Implement comprehensive testing methodologies, including unit testing, integration testing, and regression testing, to catch bugs and ensure the
    correctness of your code.

    - Apply Design Patterns: Utilize design patterns that promote safety and reliability, such as the Observer pattern, State pattern, or Command pattern. These patterns can help structure your code in a more modular and maintainable way.

    - Employ Static Code Analysis Tools: Utilize static code analysis tools that are compatible with Delphi. These tools can help identify potential issues, enforce coding guidelines, and detect common programming mistakes.

    - Consider Formal Methods: While Delphi may not directly support SPARK or formal verification, you can use external tools or libraries to apply formal methods to critical parts of your codebase. Formal methods involve mathematical verification techniques
    to prove the correctness of software.

    - Documentation and Code Comments: Maintain thorough documentation and meaningful code comments. This can enhance code comprehension, facilitate future maintenance, and aid in understanding the safety measures employed in your code.


    By implementing these practices, you can improve the safety and reliability of your Delphi codebase.
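The defensive-programming advice above is language-neutral, so here is a minimal sketch of it (the function and parameter names are my own illustration, not from any Delphi or Kithara API): validate external input, check boundaries, and fail with an explicit error instead of continuing with corrupt state.

```python
# Illustration of the defensive-programming advice above: validate inputs,
# check boundaries, and raise explicit errors instead of propagating bad data.

def read_sensor_value(raw: str, low: float = 0.0, high: float = 100.0) -> float:
    """Parse and validate an external value before using it."""
    try:
        value = float(raw)                 # input validation
    except ValueError as exc:
        raise ValueError(f"not a number: {raw!r}") from exc
    if not (low <= value <= high):         # boundary check
        raise ValueError(f"value {value} outside [{low}, {high}]")
    return value

print(read_sensor_value("42.5"))  # 42.5
```

In a real-time context the same pattern applies, except that the error path must also be deterministic and bounded in time.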


    And you can read about the new version of Delphi and buy it from the following website:

    https://www.embarcadero.com/products/delphi


    More of my philosophy about Zettascale and Exascale supercomputers and about quantum computers and about technology and more of my thoughts..


    "Businesses in, for example, finance, logistics, and energy, will benefit hugely from quantum’s applications for optimization, simulation, and forecasting. And one potential application of quantum computing that is exciting is in drug discovery and
    diagnostics. These quantum advantages largely depend on quantum’s elevated computing power which could enable physicians and researchers to solve problems which are otherwise intractable with classical computers. Notably, this includes the potential to
    simulate very large, complex molecules, which are actually quantum systems, meaning that a quantum computer can more effectively predict the properties, behaviours and interactions of those molecules at an atomic level. This has huge implications for
    identifying new drug candidates, the future of personalised medicine, and the ability to assess for abnormalities in tissues which cannot be discerned with the naked eye – or with current computational methods."

    Read more here:

    https://www.linkedin.com/pulse/quantum-its-all-computing-stuart-woods/?utm_source=share&utm_medium=member_ios&utm_campaign=share_via&fbclid=IwAR1JC8rIUmzUvD-YcRFRc-iEwRdZZ2rRYfHZcgRih8u8Lm2NO_RRV36WmHI


And I invite you to read the following interesting article:


    Quantum computers are coming. Get ready for them to change everything

    https://www.zdnet.com/article/quantum-computers-are-coming-get-ready-for-them-to-change-everything/


And there is also another way of attaining Zettascale, and it is with quantum-classical hybrid systems; read about it here:

    PREPARING FOR UPCOMING HYBRID CLASSICAL-QUANTUM COMPUTE

    https://www.nextplatform.com/2023/03/23/preparing-for-upcoming-hybrid-classical-quantum-compute/


And of course we need Zettascale, or ZettaFLOP, computing so that the Metaverse becomes possible, and as you can notice, the article below, in which Intel's Raja Koduri is talking, says that the architecture is possible and that it will be ready around 2027 to 2030, and it is the following:

    An architecture jump of 16x, power and thermals are 2x, data movement is 3x, and process is 5x. That is about 500x, on top of the two ExaFLOP Aurora system, gets to a ZettaFLOP.

    Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

    Read more here:

    https://www.anandtech.com/show/17298/interview-with-intels-raja-koduri-zettascale-or-zettaflop-metaverse-what


More of my philosophy about China and Exascale supercomputers..

    China has already reached Exascale - on two separate systems

    Read more here:

    https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/


    And in USA Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance

    Read more here:

    https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance


But Exascale or Zettascale supercomputers will also allow us to construct an accurate map of the brain that allows us to "reverse" engineer or understand the brain; read the following to notice it:

    “If we don’t improve today’s technology, the compute time for a whole mouse brain would be something like 1,000,000 days of work on current supercomputers. Using all of Aurora, if everything worked beautifully,
    it could still take 1,000 days.” Nicola Ferrier, Argonne senior computer scientist

Read more here to understand:

    https://www.anl.gov/article/preparing-for-exascale-argonnes-aurora-supercomputer-to-drive-brain-map-construction


    Also Exascale supercomputers will allow researchers to tackle problems
    which were impossible to simulate using the previous generation of
    machines, due to the massive amounts of data and calculations involved.

    Small modular nuclear reactor (SMR) design, wind farm optimization and
    cancer drug discovery are just a few of the applications that are
    priorities of the U.S. Department of Energy (DOE) Exascale Computing
    Project. The outcomes of this project will have a broad impact and
    promise to fundamentally change society, both in the U.S. and abroad.

    Read more here:

    https://www.cbc.ca/news/opinion/opinion-exascale-computing-1.5382505


    Also the goal of delivering safe, abundant, cheap energy from fusion is
    just one of many challenges in which exascale computing’s power may
    prove decisive. That’s the hope and expectation. Also to know more about
    the other benefits of using Exascale computing power, read more here:

    https://www.hpcwire.com/2019/05/07/ten-great-reasons-among-many-more-to-build-the-1-5-exaflops-frontier/


    And I have just said, before reading the following article, the following about Intel company from USA:


"And you have to know that in the quarter, Intel's sales across all product lines fell by 36.2 percent to $11.721 billion, but I think that Intel CEO Pat Gelsinger is still optimistic: he insists that Intel's plan to grow a whopping foundry business will pay off, and he also believes that the PC market will rebound at some point, and he is also optimistic about the process and server processor roadmaps. Read more about it here: https://www.nextplatform.com/2023/03/31/finally-some-good-news-for-the-intel-xeon-cpu-roadmap/ , so I think we have to be optimistic about Intel, and I invite you to read the other following article to understand more:


    https://www.theregister.com/2023/04/28/intel_28b_loss/ "



So you can read carefully the following new article so that you understand more about this subject of the recovery of the AMD and Intel CPU market:

    AMD and Intel CPU Market Share Report: Recovery on the Horizon


    https://www.tomshardware.com/news/amd-and-intel-cpu-market-share-report-recovery-looms-on-the-horizon


And of course, I have just talked about quantum computers in my below previous thoughts, but I think I have to explain something important so that you understand: for a classical parallel computer to perform a billion computations at once, we would need about a billion different processors, but in a quantum computer a single register can represent about a billion states at once, since each qubit of the register can be in a superposition of the states 1 and 0 (a register of 30 qubits spans 2^30, or roughly a billion, states); this is known as quantum parallelism. But connecting quantum computing to "Moore's Law" is sort of foolish -- it is not an all-purpose technique for faster computers, but a limited technique that makes certain types of specialized problems easier, while leaving most of the things we actually use computers for unaffected.


So I think I am highly smart since I have passed two certified IQ tests and I have scored "above" 115 IQ. I have just talked about artificial intelligence and about my new model of what consciousness is (read about it in my below thoughts), and now I will talk about quantum computing. I have just looked at the following video about the powerful parallel quantum computer of IBM from the USA that will soon be available in the cloud, and I invite you to look at it:

    Quantum Computing: Now widely available!

    https://www.youtube.com/watch?v=laqpfQ8-jFI


But I have just read the following paper, and it says that powerful quantum algorithms for matrix operations and linear systems of equations are available. As you can notice in the following paper, many matrix operations, and also a linear-systems-of-equations solver, can be done on a quantum computer; read about it here in the following paper:

    Quantum algorithms for matrix operations and linear systems of equations

    Read more here:

    https://arxiv.org/pdf/2202.04888.pdf


So I think that IBM will do the same for their powerful parallel quantum computer that will be available in the cloud, but I think that you will have to pay for it, since I think it will be commercial. But I think that there is a weakness with this kind of configuration of the powerful quantum computer from IBM: the cost of internet bandwidth is decreasing exponentially, but the latency of accessing the internet is not. So I think that people will still use classical computers for many mathematical applications that use mathematical operations such as matrix operations and linear systems of equations and that need a much lower latency. So I think that the business of classical computers will still be great in the future, even with the coming of the powerful parallel quantum computer of IBM; as you notice, this kind of quantum computer business is also dependent on the latency of accessing the internet. And speaking about latency, I invite you to look at the following interesting video about the latency numbers a programmer should know:

    Latency numbers programmer should know

    https://www.youtube.com/watch?v=FqR5vESuKe0
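The argument about latency above can be made concrete with the widely cited approximate "latency numbers every programmer should know" (these are order-of-magnitude figures; exact values vary by hardware and network):

```python
# Widely cited approximate latency figures (orders of magnitude only;
# exact values vary by hardware, network, and year).
LATENCY_NS = {
    "L1 cache reference":                       0.5,
    "main memory reference":                  100.0,
    "SSD random read":                    150_000.0,
    "round trip within a datacenter":     500_000.0,
    "intercontinental internet round trip": 150_000_000.0,
}

# An intercontinental round trip costs about 1.5 million times a RAM access,
# which is why a remote (cloud) quantum computer cannot help latency-bound code.
ratio = (LATENCY_NS["intercontinental internet round trip"]
         / LATENCY_NS["main memory reference"])
print(f"{ratio:,.0f}x")  # 1,500,000x
```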



    And IBM set to revolutionize data security with latest quantum-safe technology

    Read more here in the following new article:

    https://interestingengineering.com/innovation/ibm-revolutionizes-data-security-with-quantum-safe-technology


    And I have also just read the following article that says the following:

    "AES-128 and RSA-2048 both provide adequate security against classical attacks, but not against quantum attacks. Doubling the AES key length to 256 results in an acceptable 128 bits of security, while increasing the RSA key by more than a factor of 7.5
    has little effect against quantum attacks."

    Read more here:

    https://techbeacon.com/security/waiting-quantum-computing-why-encryption-has-nothing-worry-about


So I think that AES-256 encryption is acceptable encryption against quantum computers.


And symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough. And to give you more proof of it, look at the following article from ComputerWorld, where Lamont Wood is saying:

    "But using quantum technology with the same throughput, exhausting the possibilities of a 128-bit AES key would take about six months. If a quantum system had to crack a 256-bit key, it would take about as much time as a conventional computer needs to
    crack a 128-bit key.
    A quantum computer could crack a cipher that uses the RSA or EC algorithms almost immediately."

    Read more here on ComputerWorld:

    https://www.computerworld.com/article/2550008/the-clock-is-ticking-for-encryption.html
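The quoted claims follow from Grover's algorithm, which reduces a brute-force search over 2^n keys to roughly 2^(n/2) quantum operations, effectively halving the key length: AES-128 falls to 64 bits of effective security, while AES-256 retains 128 bits. A sketch of that rule of thumb:

```python
# Effective symmetric key strength under Grover's algorithm, which cuts
# brute-force search from 2**n to roughly 2**(n/2) operations.
def effective_bits(key_bits: int, quantum: bool) -> int:
    return key_bits // 2 if quantum else key_bits

print(effective_bits(128, quantum=True))   # 64  -> considered too weak
print(effective_bits(256, quantum=True))   # 128 -> still considered secure
```

Note that this rule applies only to symmetric ciphers; RSA and elliptic-curve schemes are broken outright by Shor's algorithm, which is why the quote treats them separately.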


    And about Symmetric encryption and quantum computers..

    Symmetric encryption, or more specifically AES-256, is believed to be quantum resistant. That means that quantum computers are not expected to be able to reduce the attack time enough to be effective if the key sizes are large enough.

    Read more here:

    Is AES-256 Quantum Resistant?

    https://medium.com/@wagslane/is-aes-256-quantum-resistant-d3f776163672


And it is why I have implemented parallel AES encryption with 256-bit keys in my following interesting software project called Parallel Archiver; you can read about it and download it from here:

    https://sites.google.com/site/scalable68/parallel-archiver



    More of my philosophy about 3DS ECC RDIMMs and more of my thoughts..


So there is still one important thing that I want to explain: as you notice, I have just advised you to buy the below cost-effective AMD EPYC Genoa processors with the below good motherboard for them from Supermicro, and Supermicro motherboards are known for their high quality, reliability, and flexibility, and are used by many companies in various industries. Of course, I have explained that the advantage is also that it supports 12 memory channels, but not only that: the Supermicro motherboard that I am advising you below supports both DDR5 memory, which is not fully ECC, and 3DS ECC RDIMM memory, which is fully ECC (Error-Correcting Code). And I think that 3DS ECC RDIMM is advantageous, since you have to be professional and use 3DS ECC RDIMMs that are "reliable", so you have to read my following thoughts to notice it:


    3DS ECC RDIMMs are fully ECC (Error-Correcting Code) memory modules.

    "3DS" stands for "Three Dimensional Stacking" and refers to the technology used in these memory modules to stack multiple layers of memory cells on top of each other. This allows for higher memory densities and capacities in a single module.

    "ECC" stands for "Error-Correcting Code" and is a type of memory technology that can detect and correct errors that may occur when data is stored in memory. ECC memory is commonly used in servers and other mission-critical systems where data integrity is
    of utmost importance.

    Therefore, 3DS ECC RDIMMs combine the benefits of both 3D stacking and ECC technology to provide high-density, high-capacity memory modules with built-in error correction capabilities.


    And you have to read the following article that says the following:

    "On-die ECC: The presence of on-die ECC on DDR5 memory has been the subject of many discussions and a lot of confusion among consumers and the press alike. Unlike standard ECC, on-die ECC primarily aims to improve yields at advanced process nodes,
    thereby allowing for cheaper DRAM chips. On-die ECC only detects errors if they take place within a cell or row during refreshes. When the data is moved from the cell to the cache or the CPU, if there’s a bit-flip or data corruption, it won’t be
    corrected by on-die ECC. Standard ECC corrects data corruption within the cell and as it is moved to another device or an ECC-supported SoC."

    Read more here to notice it:

    https://www.hardwaretimes.com/ddr5-vs-ddr4-ram-quad-channel-and-on-die-ecc-explained/


So I will say that the new DDR5's on-die ECC can only detect errors that occur within a cell or row during refreshes, and that it may not be able to correct errors that occur when data is moved from the cell to the cache or the CPU.

    DDR5's on-die ECC is designed to detect and correct single-bit errors within a memory cell or row during refresh operations. This is accomplished by adding extra bits to the memory data that are used to detect errors. If an error is detected, the on-die
    ECC mechanism can correct it by using the extra bits to identify and correct the erroneous bit.
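The single-bit correction idea behind standard ECC can be illustrated with a classic Hamming(7,4) code. This is a simplified sketch of the principle only; real ECC DIMMs use wider SECDED codes over 72-bit words, but the mechanism of recomputing parity bits to locate and flip back an erroneous bit is the same:

```python
# Simplified illustration of how standard ECC memory corrects a single
# bit flip, using a classic Hamming(7,4) code. Real ECC DIMMs use wider
# SECDED codes over 72-bit words, but the principle is identical.

def hamming_encode(d):                      # d = 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                       # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                       # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                       # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]     # codeword, positions 1..7

def hamming_correct(c):
    """Recompute parity; a nonzero syndrome is the 1-based flipped position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    fixed = list(c)
    if syndrome:
        fixed[syndrome - 1] ^= 1            # flip the erroneous bit back
    return fixed

word = hamming_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[2] ^= 1                           # simulate a cosmic-ray bit flip
assert hamming_correct(corrupted) == word   # the single-bit error is corrected
```

On-die ECC runs a scheme like this inside the DRAM chip only, while standard (side-band) ECC also protects the data on its way to the memory controller, which is the difference the article above describes.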


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)