Oh sorry, nvidia RTX :) Thanks!
Lowest price on eBay for me is 290 Euro :/ The P100s are 200 each, though.
Do you happen to know if I could mix a 3700 with a P100?
And thanks for the tips!
And still there are other people than you who want to do that full-time - and in doing so they provide, at least for me, more value than the 600th billion-dollar Marvel movie.
There are educators and entertainers out there who chose this as a job and are good at it. If they could live off of it by going the Patreon route instead of the shitty YouTube ad-spam one, I’d be all for it.
If that’s your intent, then it might be better to pick individual Arch Wiki pages or improve the entry-level documentation. Many people from all distros refer to it because of its sheer volume.
A “how to read tech documentation” could add value for this target group.
User perspective:
If you want something big, I’d pitch NixOS - as in the core distribution. It’s a documentation nightmare: as a user I had to go through the options search and then try to figure out what the options mean far more often than I found comprehensive documentation.
That would be half writing and half coordinating writers, though, I suspect.
Another great project with mixed-quality documentation is openHAB. It fits the bill of being more on the backend-heavy side, and the devs are very open in my experience. I actually see it as superior in its core concepts to the far more popular Home Assistant in every aspect except documentation!
That said: thanks for putting the effort in! ♥
That’s still like a hundred to one, or even way worse. We could simply shove in (group reader doesn’t like) until they are so full that they can’t move any more, then pile on each one individually, and still have a few billion people preparing the lion BBQ for afterwards.
The numbers gap is ridiculously huge!
Hm, you’re currently making the case for “maybe shoot the messenger a little bit after all, because the message hurts even though it’s somehow true.”
Not meant seriously - siblings in suffering and all that ♥
As a former East German kid: I’m not one of the upvoters, but I do see a structural problem that was fostered by the “Aufbau Ost”: an infrastructure push without bringing the population along.
In particular, education and the simply different needs of a more dispersed population never kept pace. My rule of thumb: Kohl pushed everything you could take a photo of. The people weren’t part of that…
The same thing is coming for us in western Germany too. (Or is it already here? I should look at the data on how investment in infrastructure vs. education vs. inflation has developed. But I’ll save that for when I’m in a better mood.)
Unfortunately, I have no idea how we get out of this. The populists have very cleverly branded any dissent as “elitist” or “establishment”.
The brackets were implied by OP.
I.e. “one way: (be ignorant || approve)”.
Which of those questions from the article would you describe as loaded enough to have prompted the quite interesting responses?
I expected to read something like “why are Chinese people stupid?” and then some racist shit - but the answers to those questions are… interesting.
The bankruptcy scenario is correct, but the first part isn’t: you don’t have X shares as collateral that you can liquidate. Instead, you have collateral that covers a sum Y.
As long as the collateral contract covers enough stock positions, the bank won’t lose.
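To make that coverage logic concrete, here’s a tiny sketch with entirely made-up numbers (the loan size, coverage ratio, and positions are all hypothetical):

```python
# Hypothetical numbers: the bank doesn't hold X specific shares as
# collateral - it holds pledged positions whose total value must
# cover an agreed sum Y.
loan = 1_000_000                   # amount lent by the bank
coverage_ratio = 1.5               # bank demands 150% coverage
required = loan * coverage_ratio   # the sum Y

# pledged positions as (share count, current price) pairs
positions = [(10_000, 120.0), (5_000, 80.0)]
collateral_value = sum(count * price for count, price in positions)

# the bank is safe as long as the pledged value stays above Y;
# which particular shares make up that value doesn't matter
covered = collateral_value >= required
print(collateral_value, required, covered)
```

The point is that coverage is a sum, not a share count: prices can move, and only the total falling below Y puts the bank at risk.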
That said, all of this assumes standard contracts. If your bank wrote “0% interest and instead 50% of the revenue growth of Twitter”, then that would be an easy way to lose money.
I haven’t heard of a stupid banker yet, though, so what would the chances of that be?
Ah that would make sense, thanks!
I haven’t found (while skimming) any details about why the “highly improved” version didn’t make it upstream to OpenWrt?
According to their page, it’s a pure SearXNG instance. I didn’t see anything change on my own instance, so there are three options I can see:
And then there’s the obligatory “none or all of the above”.
Personally, I’d guess it’s just a fluke. I gave it a few searches from Firefox mobile on “all languages” and got a mix of mainly English results with a bit of German and French in there.
Edit: if you’re comfortable with that feel free to share some search terms and we can compare results. Would be curious myself!
The screenshot has the criteria included, though. The relevant part: it has to either be for children or for everyone.
I use Lemmy in two ways. Whitelist: show me my subscriptions and only those (the subscribed feed). Or blacklist: show me everything except the things I never want to see.
The latter led me to this thread! They’re two different experiences for me, and I get out of my interest bubble a bit from time to time.
Because it’s basically axiomatic: ssh uses all the keys it knows about. The system can’t tell you why it’s not using something it doesn’t know it should be able to use. You can pass the key explicitly with -i to check whether ssh ignores it because the file’s content is broken or because of its location.
That said: having a reason doesn’t make -v any more useful for cases like this!
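One way to make a key “known” to ssh so it gets offered automatically is an entry in ~/.ssh/config - a minimal sketch (the host alias, hostname, user, and key file name here are all made up):

```
# ~/.ssh/config - hypothetical host and key name
Host myserver
    HostName server.example.com
    User alice
    IdentityFile ~/.ssh/id_ed25519_myserver
    IdentitiesOnly yes   # only offer the identities listed here
```

With that in place, `ssh myserver` uses the named key, and `ssh -v myserver` will at least show it being offered - which narrows down whether the problem is the key itself or the server rejecting it.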
I’d try ChatGPT for that! :)
But to give you a very brief rundown: if you have no experience in any of these areas and are self-learning, you should expect a long ramp-up phase! Perhaps there is an easier route, but if there is, I’m not familiar with it.
First, familiarize yourself with server setups. If you only want to host this, you won’t have to go into the network details, but they could become a source of errors at some point, so be warned! The usual tip here is to get familiar enough with Docker that you can read and understand Docker Compose files. The de facto standard for self-hosting is Linux machines, but I have read of people who used macOS and even Windows successfully.
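To give a feel for what “reading a Compose file” means in this context, here’s a minimal sketch using Ollama as an example model backend (the port, volume path, and GPU stanza follow the common pattern - check the project’s own docs for the authoritative setup):

```yaml
# docker-compose.yml - minimal example, paths and port mapping
# are the parts you'd adapt to your own machine
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"          # host port : container port
    volumes:
      - ./ollama-data:/root/.ollama   # path mapping: models persist here
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # hands the GPU into the container
              count: 1
              capabilities: [gpu]
```

If you can read what each of those lines does - port mapping, volume/path mapping, GPU passthrough - you’re at the level needed to follow most self-hosting guides.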
One aspect quite unique to the model landscape is the hardware requirements. As much as it hurts my Nvidia-despising heart, at this point in time they are the de facto monopolist. Get yourself a card with 12 GB VRAM or more (everything below that will be painful, if you get things running at all - I’ve tried pulling and running smaller models on an 8 GB card and experienced a lot of waiting time and crashes). Read a bit about CUDA on your chosen OS and what the drivers need.
Once you understand that whole business of ports, containers, path mappings, and environment variables, it’s a matter of going to the linked GitHub page, following their guide, and starting a container. Loading models is actually the easier part once you have the infrastructure running.
No offense intended - it’s possible that I misread your experience level:
I hear a user asking developer questions. Either you go the route of using the publicly available services (DALL·E and co.) or you start digging into hosting the models yourself. The page you linked hosts trained models to use in your own contexts, not a “click a button and it works” product.
As a starting point for image generation self hosting I suggest https://github.com/AUTOMATIC1111/stable-diffusion-webui.
For the training part, I’ll be very blunt: if you don’t intend to spend five- to six-digit sums on hardware or processing power, forget it. And even then you’d need the raw training data to pull it off.
Perhaps what you want to do is fine-tune a pretrained model; that’s something I only have a bit of experience with for LLMs, though (and even there I don’t have the hardware to get beyond a personal proof of concept).
Nah, you’re doing the right thing: getting input when not sure. That’s the way of learning!
Just one request: next time, please add the thoughts from this answer to the OP! It would make it a bit easier to read and better framed, at least for me.
(I.e. “I’m an authority in this field, look at this exciting news!” vs. “my bullshit sensors tingle but I don’t know enough - what are your thoughts?”)