Any amateur programmers write down todo lists in their dreams and forget them when they wake up? Hate when that happens.
I write linked lists
Friend just failed an Amazon coding test because he didn't know what BST/TBTs were and iterated using nested for loops lol
fuck off ■■■■■■...
Amateur Programmer Thread
When your bar is high enough that Little People can barely walk under it
I honestly believe you are the more professional programmer here
Saw this and was disappointed that it wasn't a code review
Now Let's See Paul's PR
Seeing as I don't have a lobsters account, I will be submitting things I enjoyed in this thread.
My test suite takes "[info] Run completed in 5 minutes, 58 seconds." and I can't figure out any other way to do it (that is, the other ways I've attempted have failed due to my own skill issues).
When that happened to me, I found out our app reran a longass test suite setup before each section of the test suite (plus some individual setup that was brittle/suboptimal), when it should've run certain things once, leveraged mocks following the e2e tests, and maybe used a better test DB solution.
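Roughly what I mean, as a bare-bones ScalaTest sketch (made-up names, assuming the suite is an AnyFlatSpec): expensive shared setup goes in beforeAll so it runs once per suite instead of before every test.

import org.scalatest.BeforeAndAfterAll
import org.scalatest.flatspec.AnyFlatSpec

class SlowSuite extends AnyFlatSpec with BeforeAndAfterAll {

  // Expensive shared setup (seeding a test DB, starting services, ...)
  // runs once for the whole suite instead of before every section.
  override def beforeAll(): Unit = {
    // seedTestDb()  // hypothetical helper, stands in for the real setup
  }

  "the app" should "reuse the shared state instead of rebuilding it" in {
    // fast test body goes here
  }
}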
My diagnosis of the problem is that poking individual bits into a 128-bit vector lane isn't fast when you're emulating(?) the hardware.
Either one large UInt of arbitrary width (say, a 128-bit UInt) or one Vec(128, Bool) is the most logical solution for what I want.
Within the 128-bit UInt solution, I couldn't manage to write a function that, say, took a sequence of values like
Seq.tabulate(128/16) { i => i * 3 }
and then concatenated them all into a single 128-bit value (I presume at some point the Integer would go onto the heap).
So I did the opposite, mentioned above.
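For the record, here's the kind of packing I had in mind, as a rough, untested sketch: it assumes the operand were a single flat UInt(128.W) port rather than per-bit Bools, and packLanes/laneWidth are made-up names. The per-bit version I actually wrote is below.

// Untested sketch: pack small lane values into one BigInt so the whole
// operand could be poked as a single wide UInt instead of bit by bit.
// Assumes io.operandA were declared as UInt(128.W); laneWidth is hypothetical.
def packLanes(values: Seq[Int], laneWidth: Int): BigInt =
  values.zipWithIndex.foldLeft(BigInt(0)) { case (acc, (v, i)) =>
    acc | (BigInt(v) << (i * laneWidth))
  }

// e.g. dut.io.operandA.poke(packLanes(Seq.tabulate(128 / 16)(_ * 3), 16).U(128.W))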
it should "properly compute ADD operations." in {
  test(new BankALU(config = config)) { dut =>
    // Drive each lane bit by bit: lane i gets operands i and i + 1.
    (0 until n).foreach { i =>
      val value1 = i
      val value2 = i + 1
      (0 until config.operationWidth).foreach { bitindex =>
        val idx = (i * config.operationWidth) + bitindex
        dut.io.operandA(idx).poke(mask(value1, bitindex))
        dut.io.operandB(idx).poke(mask(value2, bitindex))
      }
    }
    dut.io.operation.poke(ALUOperation.ADD)
    dut.clock.step(1)
    // Check each result lane bit by bit against the expected sum.
    (0 until n).foreach { i =>
      val value = i + (i + 1)
      (0 until config.operationWidth).foreach { bitindex =>
        val idx = (i * config.operationWidth) + bitindex
        dut.io.result(idx).expect(mask(value, bitindex))
      }
    }
  }
}
Attempts to cast to a Vec(n, operationWidth) type just explode, because those casts are meant to be used inside hardware constructs and this is a test function.
Why do I need multiple operationWidths? I don't.
Read the Hwacha source code? Or any of the RISC-V processors with SIMD extensions? That'd take days. LOL
Ultimately I have the ability to cast between whatever I want inside hardware modules, so I should just parameterize the IO RegIn types, work with that in the tests, and then use 128.W (or something) throughout the real hardware and cast when needed.
that is likely the solution, but it's not pleasing.
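Roughly the shape I mean, as an untested sketch (the Bundle and module names are guesses, not the real BankALU IO): keep the IO as a flat parameterized UInt and only reinterpret it as lanes inside the hardware with asTypeOf/asUInt.

import chisel3._

// Rough sketch, not the actual BankALU: a parameterized flat-UInt interface
// that gets reinterpreted as lanes only inside the hardware module.
class BankIO(width: Int) extends Bundle {
  val operandA = Input(UInt(width.W))
  val operandB = Input(UInt(width.W))
  val result   = Output(UInt(width.W))
}

class BankWrapper(width: Int, laneWidth: Int) extends Module {
  val io = IO(new BankIO(width))
  // Legal here because we're inside a Module: view the flat operands as lanes...
  val lanesA = io.operandA.asTypeOf(Vec(width / laneWidth, UInt(laneWidth.W)))
  val lanesB = io.operandB.asTypeOf(Vec(width / laneWidth, UInt(laneWidth.W)))
  // ...do per-lane work, then flatten back to one wide UInt for the IO.
  val sums = VecInit(lanesA.zip(lanesB).map { case (a, b) => a + b })
  io.result := sums.asUInt
}

That way the test only ever pokes and expects one wide value, and the lane structure stays internal to the module.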
Are you using treadle?
Nevermind I have no idea what I'm talking about but was just googling to see if I could find anything
I'm assuming you saw this?
As an amateur programmer, I find this always poses a big problem and consumes a lot of time. It's unfortunate; this is not a problem that appears in professional programming.