ReplaceFile exists to get everyone else’s semantics though?
Related, note that division is much slower than multiplication.
Instead of:
n / d
see if you can refactor it to:
n * (1.0/d)
where that inverse can then be hoisted out of loops.
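As a concrete illustration, here is a minimal Python sketch (the function and variable names are made up for the example). Note that compilers generally cannot make this rewrite for you, because x / d and x * (1.0/d) can round differently, so the hoisting has to be done by hand (or unlocked with fast-math-style options):

```python
# Hypothetical example: scaling many samples by 1/d.
def scale_slow(samples, d):
    return [x / d for x in samples]       # one division per element

def scale_fast(samples, d):
    inv_d = 1.0 / d                       # one division, hoisted out of the loop
    return [x * inv_d for x in samples]   # only multiplications inside the loop

print(scale_fast([2.0, 4.0, 8.0], 4.0))   # [0.5, 1.0, 2.0]
```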
This is about the only place where SQL is a badly designed language, and you should use a frontend that forces you to write your queries in the order (table, filter, columns) for consistency. For example:
UPDATE table_name WHERE y = $3 SET w = $1, x = $2, z = $4 RETURNING *
FROM table_name SELECT w, x, y, z
Obviously the actual programs are trivial. The question is, how are the tools supposed to be used?
So you say to use deno? Out of all the tutorials I found telling me what tools to use, that wasn’t one of them (I really thought this “typescript” package would be the thing I was supposed to use; I just checked again on a hot cache and it was 1.7 seconds real time, 4.5 seconds CPU time, only 2.9 seconds if I pin everything to a single core). And I swear just this week I saw people saying “seriously, don’t use deno”. It also doesn’t seem to address the browser use case at all though.
In other languages I know, I know how to write 4 files (the fib library and 3 frontends), and compile and/or execute them separately. I know how to shove all of them into a single blob with multiple entry points selected dynamically. I know how to shove just one frontend with the library into a single executable. I know how to separately compile the library and each frontend, producing 4 separate artifacts, with the library being dynamically replaceable. I even know how to leave them as loose files and execute them directly (barring things like C). I can choose between these things all in a single codebase, since there are no hard-coded project filenames.
I learned these things because I knew I wanted the ability from previous languages I’d learned, and very quickly found how the new language’s tools supported that.
I don’t have that for TS (JS itself seems to be fine, since I have yet to actually need all the polyfill spam). And every time I try to find an answer, I get something that contradicts everything I read before.
That is why I say that TS is a hopelessly immature ecosystem.
I’m not concerned about Microsoft’s involvement. TypeScript shows an immature tooling ecosystem even on its own merits.
I posted some of my concerns earlier, along with a basic problem challenge (that I can easily do in many other languages) that nobody managed to solve: https://programming.dev/comment/2734178
It’s because unicode was really broken, and a lot of the obvious breakage was when people mixed the two. So they did fix some of the obvious breakage, but they left a lot of the subtle breakage (in addition to breaking a lot of existing correct code, and introducing a completely nonsensical bytes class).
I’ve only ever seen two parts of git that could arguably be called unintuitive, and they both got fixes:
- git reset seems to do 2 unrelated things for some people. Nowadays git restore exists.
- a..b and a...b commit ranges in various commands. This is admittedly obscure enough that I would have to look up the manual half the time anyway.
- man git foo didn’t used to work - unintuitive, I guess.

The tooling to integrate git submodule into normal tree operations could be improved though. But nowadays there’s git subtree for all the people who want to do it wrong but easily.
The only reason people complain so much about git is that it’s the only VCS that’s actually widely used anymore. All the others have worse problems, but there’s nobody left to complain about them.
Python 2 had one mostly-working str class, and a mostly-broken unicode class.
Python 3, for some reason, got rid of the one that mostly worked, leaving no replacement. The closest you can get is to spam surrogateescape everywhere, which is both incorrect and has significant performance cost - and that still leaves several APIs unavailable.
Simply removing str indexing would’ve fixed the common user mistake if that was really desirable. It’s not like unicode indexing is meaningful either, and now large amounts of historical data can no longer be accessed from Python.
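For reference, this is roughly what the surrogateescape workaround looks like in practice (a minimal sketch; the byte string is made up):

```python
# Decode arbitrary, possibly non-UTF-8 bytes without losing information:
# undecodable bytes become lone surrogates instead of raising.
raw = b"caf\xe9 \xff"                      # not valid UTF-8
text = raw.decode("utf-8", errors="surrogateescape")

# The surrogates round-trip back to the original bytes...
assert text.encode("utf-8", errors="surrogateescape") == raw

# ...but the result is not well-formed Unicode, so anything that insists on
# valid text (printing, strict encoders, some C APIs) can still blow up:
# text.encode("utf-8")  # raises UnicodeEncodeError
```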
The problem with mailing lists is that no mailing list provider ever supports “subscribe to this message tree”.
As a result, either you get constant spam, or you don’t get half the replies.
Unfortunately both of those are used in common English or computer words. The only letter pairs not used are: bq, bx, cf, cj, dx, fq, fx, fz, hx, jb, jc, jf, jg, jq, jv, jx, jz, kq, kz, mx, px, qc, qd, qg, qh, qj, qk, ql, qm, qn, qp, qq, qr, qt, qv, qx, qy, qz, sx, tx, vb, vc, vf, vj, vm, vq, vw, vx, wq, wx, xj, zx.
Personally I have mappings based on <CR>, and press it twice to get a real newline.
The problem is that there’s a severe hole in the ABCs: there is no distinction between “container whose elements are mutable” and “container whose elements and size are mutable”.
(related, there’s no distinction for supporting slice operations or not, e.g. deque)
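A short sketch of what that hole looks like in collections.abc:

```python
from collections import deque
from collections.abc import MutableSequence, Sequence

# deque is registered as a MutableSequence, yet it rejects slice operations:
d = deque([1, 2, 3, 4])
print(isinstance(d, MutableSequence))     # True
try:
    d[1:3]
except TypeError as e:
    print("no slicing on deque:", e)

# A writable memoryview is the opposite case: its elements can be replaced,
# but its size cannot change. There is no ABC for that, so it is only
# registered as a plain Sequence.
mv = memoryview(bytearray(b"abcd"))
mv[0] = ord("x")                          # element mutation works
print(isinstance(mv, Sequence), isinstance(mv, MutableSequence))  # True False
try:
    del mv[0]                             # size mutation does not
except TypeError as e:
    print("no resizing a memoryview:", e)
```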
True, speed does matter somewhat. But even if xterm isn’t the ultimate in speed, it’s pretty good. It starts up instantly (the benefit of no extraneous libraries); the worst you can say is that it’s occasionally limited to the framerate for certain output patterns, and if there’s a clog you can always minimize it for a moment.
Speed is far from the only thing that matters in terminal emulators though. Correctness is critical.
The only terminals in which I have any confidence of correctness are xterm and pangoterm. And I suppose technically the BEL-for-ST extension is incorrect even there, but we have to live with that and a workaround is available.
A lot of terminal emulators end up hard-coding a handful of common sequences, and fail to correctly ignore sequences they don’t implement. And worse, many go on to implement sequences that cannot be correctly handled.
One simple example that usually fails: \e!!F. Nastier, however, are the ones that ignore intermediaries and execute some unrelated command instead.
I can’t be bothered to pick apart specific terminals anymore. Most don’t even know what an IR is.
I guess I forgot to mention the other implicit difference in concerns:
When you are a game, you can reasonably assume: I have the user’s full focus and can take all the computing resources of their device, barring a few background apps.
When you are an application, the user will almost always have several other applications running to a meaningful degree, and those eat into available resources (often in a difficult-to-measure way). Unfortunately this rarely gets tested.
I’m not saying you can’t write an app using a game toolkit or vice versa, but you have to be aware of the differences and figure out how to configure it correctly for your use case.
(Though actually, some purely turn-based games that do nothing until the user enters input do just fine on app toolkits. But the existence of such games means that game toolkits almost always offer some way of supporting the app paradigm. By contrast, app toolkits often lack ready support for continuous game paradigms … unless you use APIs designed for video playback, often involving creating a separate child “window”. Actual video playback is really hard; even the makers of dedicated video-playing programs mess it up.)
The problem with XCB is that it’s designed to be efficient, not easy. If you’re avoiding toolkits for some reason, “so what if I block the world” may be a reasonable tradeoff.
There tends to be one major difference between games and non-game applications, so toolkits designed for one are often quite unsuitable for the other.
A game generally performs logic to paint the whole window, every frame, with at most some framerate-limiting in “paused” states. This burns power but is steady and often tries hard to reduce latency.
An application generally tries to paint as little of the window as possible, as rarely as possible. Reducing video bandwidth means using a lot less power, but can involve variable loads so sometimes latency gets pushed down to “it would be nice”.
Notably, the implications of the 4-way choice between {tearing, vsync, double-buffer, triple-buffer} look very different between those two - and so does the question of “how do we use the GPU?”
1. Don’t target X11 specifically these days. Yes, a lot of people still use it or at least support it in a backward-compatible manner, but Wayland adoption is only increasing.
2. Don’t fear the use of libraries. SDL and GTK, being C-based, should both be feasible from assembly; at most you might want to build a C program that dumps constants (if -dM doesn’t suffice) and struct offsets (if you don’t want to hard-code them).
Even logging can sometimes be enough to hide the heisenbug.
Logging to a file descriptor can sometimes be avoided by logging to memory (which, for crash-safety, includes the possibility of an mmap’ed file, since the kernel will take care of it as long as the whole system doesn’t go down). But logging from every thread to a single section of memory can also be problematic (even without mutexes, atomics can be expensive and certainly have side-effects) - sometimes you need a separate per-thread log, combined afterwards by the log-reader tool, as sketched below.
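The shape of that idea, as a Python sketch (names are illustrative; in a native program each buffer would be preallocated per thread and the append path would avoid any shared synchronization):

```python
import threading
import time

_local = threading.local()
_all_buffers = []                  # one entry per thread
_register_lock = threading.Lock()  # touched once per thread, not per message

def log(msg):
    buf = getattr(_local, "buf", None)
    if buf is None:
        buf = _local.buf = []
        with _register_lock:
            _all_buffers.append(buf)
    # hot path: append to this thread's own buffer, no cross-thread traffic
    buf.append((time.monotonic(), threading.get_ident(), msg))

def dump():
    # the "log-reader" step: merge the per-thread buffers into one ordered view
    return sorted(entry for buf in _all_buffers for entry in buf)
```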
I don’t remember the last time I used ctrl-C. It’s always select or "+y.
In my experience, Cinnamon is definitely highly immature compared to KDE. Very poor support for virtual desktops is the thing that jumped out at me most. There were also some problems regarding shortcuts and/or keyboard layout, I think, and probably others, but I only played with it for a couple of weeks while limited to a LiveCD.