• AI for FPGA design

    From john larkin@21:1/5 to All on Sat Aug 9 07:09:12 2025
    XPost: sci.electronics.design

    It seems to be happening, according to AI.

    It would be cool to design FPGAs at a higher level than VHDL or
    Verilog.

    https://www.campusreform.org/article/gen-z-right-facing-big-problem-no-previous-generation-encountered/28270

    Recent grads are complaining that they can't find jobs because of AI.
    One estimate I've seen (from an EE student) is that 95% of EE grads
    are coders, not circuit designers. So far, AI doesn't seem to be very
    good at analog circuit design.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niocláisín Cóilín de Ghlostéir@21:1/5 to All on Sun Aug 10 21:20:33 2025
    XPost: sci.electronics.design

    Dear Mister Larkin:

    Ben Cohen has published many LinkedIn posts in which he promotes
    Perplexity AI as a help with Verilog - not as a way to avoid Verilog.

    A lady claimed via LinkedIn that an AI service produced bad Verilog
    code, so she concluded that AI is not going to threaten her job, and I
    wrote to her that she deserves a refund.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From john larkin@21:1/5 to Spamassassin@irrt.De on Sun Aug 10 13:32:36 2025
    XPost: sci.electronics.design

    On Sun, 10 Aug 2025 21:20:33 +0200, Niocláisín Cóilín de Ghlostéir <Spamassassin@irrt.De> wrote:

    > Dear Mister Larkin:
    >
    > Ben Cohen has published many LinkedIn posts in which he promotes
    > Perplexity AI as a help with Verilog - not as a way to avoid Verilog.
    >
    > A lady claimed via LinkedIn that an AI service produced bad Verilog
    > code, so she concluded that AI is not going to threaten her job, and
    > I wrote to her that she deserves a refund.

    Then why produce Verilog code?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niocláisín Cóilín de Ghlostéir@21:1/5 to Spamassassin@irrt.De on Mon Aug 11 01:06:03 2025
    XPost: sci.electronics.design

    On Sun, 10 Aug 2025, john larkin wrote:
    "On Sun, 10 Aug 2025 21:20:33 +0200, Nioclåisín Cóilín de Ghlostéir <Spamassassin@irrt.De> wrote:

    Dear Mister Larkin:

    Ben Cohen posts many LinkedIn posts via which he promotes Perplexity AI
    for helps with Verilog - not to avoid Verilog.
    [. . .]

    Then why produce Verilog code?"


    Dear Mister Larkin:

    Well, I favor VHDL! However, as for Ben and AI: Ben himself reports that the AI makes mistakes. I suggest reading what he writes about it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bill Sloman@21:1/5 to john larkin on Mon Aug 11 13:58:20 2025
    XPost: sci.electronics.design

    On 11/08/2025 6:32 am, john larkin wrote:
    > On Sun, 10 Aug 2025 21:20:33 +0200, Niocláisín Cóilín de Ghlostéir <Spamassassin@irrt.De> wrote:
    >
    >> Dear Mister Larkin:
    >>
    >> Ben Cohen has published many LinkedIn posts in which he promotes
    >> Perplexity AI as a help with Verilog - not as a way to avoid Verilog.
    >>
    >> A lady claimed via LinkedIn that an AI service produced bad Verilog
    >> code, so she concluded that AI is not going to threaten her job, and
    >> I wrote to her that she deserves a refund.
    >
    > Then why produce Verilog code?

    True. Programmers should write everything in hex code, rather than
    using the crutch of assembler or some even higher-level language.

    It wouldn't help their productivity, and it would make it even harder
    for the people maintaining the product to work out which segment of
    code or chunk of logic was actually doing what, but at least you'd know
    what was actually going on, even if you couldn't work out what it was
    intended to be doing, or why.

    --
    Bill Sloman, Sydney

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Spamassassin@irrt.De on Mon Aug 11 00:36:01 2025
    XPost: sci.electronics.design

    "Niocláisín Cóilín de Ghlostéir" <Spamassassin@irrt.De> wrote in message news:ff700ae7-08a7-bf40-f29a-69c44bd31ae7@irrt.De...
    > Dear Mister Larkin:
    >
    > Ben Cohen has published many LinkedIn posts in which he promotes
    > Perplexity AI

    https://www.perplexity.ai/

    How many t's are there in Stuttgart?

    Hmm.

    > as a help with Verilog - not as a way to avoid Verilog.
    >
    > A lady claimed via LinkedIn that an AI service produced bad Verilog
    > code, so she concluded that AI is not going to threaten her job, and
    > I wrote to her that she deserves a refund.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niocláisín Cóilín de Ghlostéir@21:1/5 to john larkin on Mon Aug 11 11:25:31 2025
    XPost: sci.electronics.design

    On Mon, 11 Aug 2025, Bill Sloman wrote:
    "On 11/08/2025 6:32 am, john larkin wrote:
    [. . .]

    Then why produce Verilog code?

    True. Programmers should write everything in hex code, rather than
    using the crutch of assembler or some even higher-level language."


    Dear Mister Sloman,

    I believe that what Mister Larkin is getting at here is that he wants
    to use an AI at a higher level than Verilog, so Mister Larkin is
    perplexed as to why Ben Cohen advocates that an electronics worker both
    use Perplexity AI to produce Verilog code and continue to write Verilog
    manually.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niocláisín Cóilín de Ghlostéir@21:1/5 to All on Mon Aug 11 11:17:28 2025
    XPost: sci.electronics.design

    On Mon, 11 Aug 2025, Edward Rawde wrote:
    "> Ben Cohen posts many LinkedIn posts via which he promotes Perplexity AI

    https://www.perplexity.ai/

    How many t's are there in Stuttgart."


    Dear Mister Rawde:

    Thanks for exposing this shortcoming. I will inform Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niocláisín Cóilín de Ghlostéir@21:1/5 to All on Mon Aug 11 12:29:17 2025
    XPost: sci.electronics.design

    I wrote yesterday:
    "A lady claimed via LinkedIn that an AI service produced bad Verilog
    code, so she concluded that AI is not going to threaten her job, and I
    wrote to her that she deserves a refund."


    Dear all:

    "User Agreement
    Effective on November 20, 2024
    [. . .]
    8.2. Don'ts
    You agree that you will not:
    [. . .]
    4. Copy, use, display or distribute any information (including content) obtained from the Services, whether directly or through third parties
    (such as search tools or data aggregators or brokers), without the
    consent of the content owner (such as LinkedIn for content it owns);"
    says
    HTTPS://WWW.LinkedIn.com/legal/user-agreement#dos

    I asked Ms. Sharada Yeluri for permission to republish from that LinkedIn thread. She liked the request, so I republish . . .

    "Sharada Yeluri
    ‱ 3:e+Premium ‱ 3:e+
    Engineering Leader
    6 mĂ„n [months ago i.e. circa February 2025] ‱ Redigerad [Swedish for edited] ‱ 6 mĂ„nader sedan [months ago] ‱ Redigerad ‱ Synligt för alla, pĂ„
    och utanför LinkedIn

    Följ [Follow]

    ChatGPT o1 with advanced reasoning … excels at competition-level math,
    solves PhD-level science questions, tackles complex multi-step problems
    with chain-of-thought reasoning … The list goes on.

    Curious about its prowess, I decided to test its ability to develop
    Verilog RTL code for a functional block that's commonly found in most networking and XPU ASICs: a buffer manager. After all, they charge $200
    per month, so there must be some magic.

    Grudgingly, I paid the fee and posed a challenge: Build a buffer manager
    for a 16K entry-deep buffer that is 128 bits wide, shared dynamically
    between 256 queues. The module should sustain one enqueue and one dequeue every cycle without stalls... Use SRAMs for linked list structures, and
    yes, the SRAMs have two-cycle read latencies...
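
    To make the scale of that challenge concrete, here is a minimal Verilog
    interface sketch of such a buffer manager. Every name, port, and
    parameter below is an illustrative assumption, not taken from her
    actual prompt or from ChatGPT's output:

      // Hypothetical interface sketch only: 16K x 128-bit shared buffer,
      // 256 dynamically shared queues, one enqueue and one dequeue per
      // cycle with no stalls.
      module buffer_manager #(
          parameter DEPTH  = 16384,           // 16K entries
          parameter WIDTH  = 128,             // entry width in bits
          parameter QUEUES = 256,             // dynamically shared queues
          parameter PTR_W  = $clog2(DEPTH),   // 14-bit cell pointers
          parameter QID_W  = $clog2(QUEUES)   // 8-bit queue IDs
      ) (
          input  wire             clk,
          input  wire             rst_n,
          // Enqueue side: one request per cycle.
          input  wire             enq_valid,
          input  wire [QID_W-1:0] enq_qid,
          input  wire [WIDTH-1:0] enq_data,
          output wire             enq_full,       // free list exhausted
          // Dequeue side: one request per cycle.
          input  wire             deq_valid,
          input  wire [QID_W-1:0] deq_qid,
          output wire             deq_data_valid,
          output wire [WIDTH-1:0] deq_data,
          output wire             deq_empty       // selected queue empty
      );
          // Internally: per-queue head/tail pointers, a free-cell list, and
          // a next-pointer SRAM forming the linked lists. The stated
          // two-cycle SRAM read latency is what forces pipelining, and with
          // it the back-to-back dequeue hazard discussed below.
      endmodule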

    I know there aren't many open-source Verilog designs for #ChatGPT
    to learn from. Still, with its "advanced" reasoning abilities, I expected
    a decent output.

    It churned out an RTL module and a Verilog test bench—points for effort. When I pointed out how the design could not handle back-to-back dequeues
    from the same queue, it gave up too quickly and declared there was no way
    to design it without stalling the inputs. I nudged it towards approaches
    like doubly linked lists or ping-pong buffers. It understood the concepts
    and even explained them back to me, like a student trying to impress a professor... 😊

    When the RTL didn’t give the correct results, I directly fed back the simulation results from its test bench for it to analyze. After a few
    feedback iterations, the enqueues started working—progress!

    The dequeues, however, remained stubbornly broken. Hoping to simplify
    things, I relaxed the constraints, allowing a 5-cycle gap. No luck...
    Instead, ChatGPT decided the simulator was wrong—an audacious claim for
    an AI model still learning to count pipeline stages.

    Eventually, I debugged the RTL myself and found the culprit - a typo.
    After fixing it, the dequeues worked. However, the design still lacked
    hazard checks for back-to-back dequeues, and after an hour of trying to
    teach pipeline bypasses, I called it quits.
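
    For readers unfamiliar with the hazard she was trying to teach: with a
    two-cycle read of the per-queue head pointer, a dequeue issued one or
    two cycles earlier for the same queue has not yet written its updated
    head back, so the SRAM read returns stale data, and a bypass is the
    standard fix. The sketch below is a hypothetical illustration with
    invented names, not the code from the thread:

      // Illustrative bypass sketch for the back-to-back dequeue hazard
      // (all names invented; not the actual code from the thread).
      // Assumption: the per-queue head-pointer SRAM has a two-cycle read
      // latency, so a dequeue issued at cycle N has not written its
      // updated head pointer back by cycles N+1 and N+2.
      module deq_head_bypass #(
          parameter QID_W = 8,
          parameter PTR_W = 14
      ) (
          input  wire             clk,
          input  wire             deq_valid,       // incoming dequeue
          input  wire [QID_W-1:0] deq_qid,
          input  wire [PTR_W-1:0] head_ram_rdata,  // stale under hazard
          input  wire [PTR_W-1:0] head_p1,         // head in flight, stage 1
          input  wire [PTR_W-1:0] head_p2,         // head being written back
          output wire [PTR_W-1:0] head_effective   // freshest head to use
      );
          reg             vld_p1, vld_p2;          // dequeues in flight
          reg [QID_W-1:0] qid_p1, qid_p2;

          always @(posedge clk) begin
              vld_p1 <= deq_valid;  qid_p1 <= deq_qid;
              vld_p2 <= vld_p1;     qid_p2 <= qid_p1;
          end

          // Prefer the youngest in-flight value over the SRAM read data.
          assign head_effective =
              (vld_p1 && deq_qid == qid_p1) ? head_p1 :
              (vld_p2 && deq_qid == qid_p2) ? head_p2 :
                                              head_ram_rdata;
      endmodule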

    The good news? 🤔

    While the ChatGPTs and copilots might take over software engineering
    jobs, they are far from snatching jobs from ASIC engineers … 😊

    They may argue about the lack of open-source Verilog for AI models to
    train on - chip designs are locked away tighter than bank vaults. But if ChatGPT can solve Olympiad math through reasoning, why does reasoning
    through pipeline hazards feel like rocket science to it? 🤔

    The pace of innovation needed to achieve #AGI is directly tied to advancements in XPUs and the networking hardware they rely on. If AI
    companies are serious about accelerating AGI development, we need models
    that can reason through complex chip design problems and help compress
    design cycles. After all, these chips are the foundation for their AGI
    dreams.

    #OpenAI team, now that the Olympiad math is behind you, how about
    the chip design challenge next?"

    "Sharada Yeluri

    Författare [LinkedIn Original Poster]
    Engineering Leader
    6 mÄn

    Andreas Olofsson, 100% agree. But again, the idea behind reasoning models
    is that they work well even in the absence of tons of data during
    training. The model seems to understand all the Verilog syntax and can
    spit out hundreds of lines of code that compiles well. When I explain pipelining concepts, it understands and repeats back its interpretation
    with examples. It almost felt like I was talking to a new college grad.
    But, it fell short of actually implementing the concepts back in Verilog.
    It probably needs fine-tuning during the training phase with examples
    where the feedback from the simulation can be used to train the models.
    Just thinking out loud."

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Gaurav Vaid , hmm.. interesting thoughts."

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Rob Sprinkle, I haven't used Haskell personally, so I won't be able to
    comment on it. I think the quality of the code improved a lot from the
    first pass to when I finally jumped in. It actually does learn when you
    teach new concepts. For example, when I told it that the pipeline names
    were all messed up and it should use strict suffixes like _p0, _p1, etc.
    to distinguish between the pipeline stage signals, it rewrote the code so
    well that it eventually made it easy for me to debug. If we have to
    intervene from the beginning, it defeats the purpose IMO."
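
    The stage-suffix convention she describes is worth a tiny illustration.
    In the hypothetical fragment below (invented signal names), each signal
    keeps its name and only the _p0/_p1/_p2 suffix changes, so any pipeline
    stage can be lined up at a glance:

      // Illustrative only: the stage-suffix convention described above.
      // _p0 is the input stage; _p1 and _p2 are successive pipeline
      // copies of the same logical signal.
      module suffix_demo (
          input wire       clk,
          input wire       req_valid_p0,
          input wire [7:0] req_qid_p0
      );
          reg       req_valid_p1, req_valid_p2;
          reg [7:0] req_qid_p1,   req_qid_p2;

          always @(posedge clk) begin
              // Each signal keeps its name; only the suffix changes, so a
              // reader can line up any pipeline stage at a glance.
              req_valid_p1 <= req_valid_p0;   req_qid_p1 <= req_qid_p0;
              req_valid_p2 <= req_valid_p1;   req_qid_p2 <= req_qid_p1;
          end
      endmodule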

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Ivan Djordjevic, reasoning models, as claimed by OpenAI, are supposed to
    be more intelligent than parrots :)"

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    From OpenAI: "The models use a sophisticated chain-of-thought reasoning process, allowing them to break down intricate problems into manageable steps." My experiment aims to see if the model can solve the problem on
    its own. Even then, I broke it down step by step, simplified the problem several times, asked it to reset and start over, etc., but I just could
    not get it to solve pipeline hazards. If you have better luck, do let me know."

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Varun Uniyal, I don't think ChipNeMo can solve this. But I could be
    wrong. Try it out …"

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Rajesh Parikh, Great thoughts. Maybe, in addition to more data during training, these domain-specific models also need access to Verilog
    simulators and test benches written by either humans or other models
    during training, as well as inference."

    "Paul Colin Gloster
    ‱ Du [Thou in Swedish - i.e. I]
    Researcher at Universidade de Coimbra
    6 mÄn

    Dear Sharada Yeluri: Happy New Year! Demand a refund!" She finds this
    comment to be funny. I seriously mean it.

    "Sharada Yeluri

    Författare
    Engineering Leader
    (redigerat)
    6 mÄn

    Debajyoti Pal, I understand your concerns. But think about it this way:
    around 35-40 years ago, people used to hand-draw the schematics of logic
    gates for their chips. There was no concept of using an EDA tool to
    synthesize the gates. When the EDA tools came out, a lot of design
    engineers protested that the tools did not know how to come up with an area-efficient netlist that also meets timing and argued that we should
    still rely on hand-drawn logic gates for high-speed datapath. I remember
    at Sun Microsystems, we used to have special teams to do datapath design
    where the logic to gates was done manually, and engineers did manual P&R. Gradually, as the tools became better at what they do, we started
    trusting them for all digital logic design. The reason the tools got more advanced is that EDA vendors built feedback systems where the timing is
    fed back to make the synthesis and P&R better. LEC and formal methods
    ensured that the netlist is functionally equivalent to RTL, etc. I see
    the same transition that will eventually happen again, with tools
    generating RTL from high-level specs using advanced reasoning and humans
    using other verifier tools to ensure the generated RTL can be used. It is
    a matter of when, not if, IMO."

    "Sharada Yeluri

    Författare
    Engineering Leader
    (redigerat)
    6 mÄn

    Dawei Wang, Nice to hear from you. I did what you suggested to some
    extent. The Verilog-based TB and RTL were both generated by ChatGPT, and
    I fed the results from the TB back to ChatGPT as is, so that it could
    fine-tune the RTL until it worked."
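
    As a sketch of what "feeding the TB results back" can look like in
    practice, here is a minimal self-checking Verilog test bench. The tiny
    delay2 DUT is a hypothetical stand-in that only models a two-cycle
    latency, not the buffer manager from the thread; the point is that its
    PASS/FAIL log is exactly the kind of text one can paste back to the
    model for the next repair iteration:

      `timescale 1ns/1ps
      // Hypothetical stand-in DUT: an 8-bit, two-cycle delay line.
      module delay2 (input wire clk, input wire [7:0] d, output reg [7:0] q);
          reg [7:0] d_p1;
          initial begin d_p1 = 0; q = 0; end
          always @(posedge clk) begin
              d_p1 <= d;     // stage 1
              q    <= d_p1;  // stage 2
          end
      endmodule

      module tb;
          reg clk = 0;
          always #5 clk = ~clk;

          reg  [7:0] din = 0;
          wire [7:0] dout;
          reg  [7:0] exp_p1 = 0, exp_p2 = 0;  // expected-value pipeline
          integer errors = 0;
          integer i;

          delay2 u_dut (.clk(clk), .d(din), .q(dout));

          // Compare the DUT output against the model every cycle and log
          // mismatches; this log is what gets pasted back to the model.
          always @(posedge clk) begin
              exp_p1 <= din;
              exp_p2 <= exp_p1;
              if (dout !== exp_p2) begin
                  errors = errors + 1;
                  $display("FAIL t=%0t: got %0d, expected %0d",
                           $time, dout, exp_p2);
              end
          end

          initial begin
              for (i = 1; i < 8; i = i + 1) @(negedge clk) din = i[7:0];
              repeat (3) @(negedge clk);
              if (errors == 0) $display("PASS: all outputs matched");
              $finish;
          end
      endmodule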

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Raja Ramkaran Reddy Rudravaram, Thank you. Yes, I will try the RAG one
    next."

    "Srinivas Lingam, thanks. I agree that model developers haven't yet
    focused on enhancing the model's capabilities for chip design challenges.
    Most are probably giving up too soon with the excuse that they can't find enough data. Even with limited data, can the model developers use the
    same RL techniques they have used for math to improve the models for RTL coding? Could they use Verilog simulators as verifiers during
    post-training fine-tuning? This, combined with agentic workflows (where
    the generated RTL is continuously checked with simulation by the agents
    and fed back to the model until it converges), could probably yield good results. I hope to see more innovation on this front."

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Saurabh Chakraborty , did you try this challenge?"

    "Paul Colin Gloster
    ‱ Du
    Researcher at Universidade de Coimbra
    1 sek

    Dear Mister Patrick Lehmann: Demand a refund! Limited LinkedIn does not
    allow setting more than one reaction icon. I set that comment by you to
    insightful, and I also want to set it to funny!"

    "Sharada Yeluri

    Författare
    Engineering Leader
    6 mÄn

    Sreenivas Nandam It is not a syntax error. It has one extra pipeline
    stage for the valid bit, but not for the address, so it could never
    correctly line up the read data with the requester. For some reason, it
    was not able to figure that out."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard@21:1/5 to All on Mon Aug 11 16:57:22 2025
    XPost: sci.electronics.design

    [Please do not mail me a copy of your followup]

    john larkin <jl@glen--canyon.com> spake the secret code <64le9k1vou92tug582k53qhfijm118r68k@4ax.com> thusly:

    It would be cool to design FPGAs at a higher level than VHDL or
    Verilog.

    What about HLS?
    <https://en.wikipedia.org/wiki/High-level_synthesis>

    --
    "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
    The Terminals Wiki <http://terminals-wiki.org>
    The Computer Graphics Museum <http://computergraphicsmuseum.org>
    Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bill Sloman@21:1/5 to All on Tue Aug 12 16:32:03 2025
    XPost: sci.electronics.design

    On 11/08/2025 7:25 pm, Niocláisín Cóilín de Ghlostéir wrote:
    > On Mon, 11 Aug 2025, Bill Sloman wrote:
    > "On 11/08/2025 6:32 am, john larkin wrote:
    > [. . .]
    >
    > Then why produce Verilog code?
    >
    > True. Programmers should write everything in hex code, rather than
    > using the crutch of assembler or some even higher-level language."
    >
    > Dear Doctor Sloman,
    >
    > I believe that what Mister Larkin is getting at here is that he wants
    > to use an AI at a higher level than Verilog, so Mister Larkin is
    > perplexed as to why Ben Cohen advocates that an electronics worker
    > both use Perplexity AI to produce Verilog code and continue to write
    > Verilog manually.

    That may be what he has in mind. He's not a subtle thinker, so he may
    have missed the point that artificial intelligence isn't entirely
    reliable, and that the code it produces may not always work the way
    we'd like it to.

    Human designers have the same weakness - it often comes from having to
    solve imperfectly specified problems.

    Most of the requests for help we get here generate a lot of "what are
    you actually trying to do" questions.

    A good bit of real-life electronics involves looking at other people's
    imperfectly documented designs and working out how they were intended to
    work, and why they went wrong in some specific unforeseen situation.

    --
    Bill Sloman, Sydney

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From john larkin@21:1/5 to Spamassassin@irrt.De on Tue Aug 12 07:51:02 2025
    XPost: sci.electronics.design

    On Mon, 11 Aug 2025 11:25:31 +0200, Niocláisín Cóilín de Ghlostéir <Spamassassin@irrt.De> wrote:

    > On Mon, 11 Aug 2025, Bill Sloman wrote:
    > "On 11/08/2025 6:32 am, john larkin wrote:
    > [. . .]
    >
    > Then why produce Verilog code?
    >
    > True. Programmers should write everything in hex code, rather than
    > using the crutch of assembler or some even higher-level language."
    >
    > Dear Mister Sloman,
    >
    > I believe that what Mister Larkin is getting at here is that he wants
    > to use an AI at a higher level than Verilog, so Mister Larkin is
    > perplexed as to why Ben Cohen advocates that an electronics worker
    > both use Perplexity AI to produce Verilog code and continue to write
    > Verilog manually.

    I'm not perplexed. I just think that there will eventually, maybe
    soon, be better ways to design FPGAs than trying to define parallel
    structures with hacked procedural languages.

    It's the typing vs soldering thing again. Words vs images. Parallel
    execution every clock vs the idea of a program counter executing at
    some one location in a sea of object code.

    When FPGAs were new, we designed with schematics. I design FPGAs on whiteboards now and let other people type the VHDL and do the test
    benches. I suspect that's a common procedure: visual people design architectures and verbal people type it in.

    LabVIEW is awful, but something like that could design FPGAs.

    The benefit of an intermediate VHDL/Verilog step would be code
    portability.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)