

The Milvus documentation has a nice example: link. After this, you just need to use a persistent Milvus DB instead of the ephemeral one in the documentation.
Let me know if you have further questions.
OP can also use an embedding model and work with vector databases for the RAG.
I use Milvus (vector DB engine; open source, can be self-hosted) and OpenAI’s text-embedding-3-small for the embeddings (extremely cheap). There are also some very good open-weights embedding models on Hugging Face.
Side note: RustDesk has mobile client as well.
I manage some servers and awk can be useful for filtering data. If you use commands like grep together with the pipe operator (" | "), awk can be very handy.
Sure, a Python script can do that as well, but a one-liner in Bash is waaay faster to write.
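A toy example of the kind of one-liner I mean (the log data here is made up, not from any real server):

```shell
# Sum the second column of the lines matching "ERROR" (fake log data):
printf 'ERROR 10\nINFO 5\nERROR 32\n' \
  | grep 'ERROR' \
  | awk '{sum += $2} END {print sum}'
# prints 42
```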
For the media server, I recommend taking a look at Jellyfin. If you want some fancy statistics, also give a Prometheus + Grafana setup a look.
Even if it’s not the case, I found the console installer to be surprisingly easy.
Based on your comment, I understand that AMD GPUs aren’t capable of 8k@60 over HDMI. Is that the case? If so, why?
The weird thing is that she seems to be an actual person, if you check the socials she links in her DM. Kind of fascinating.
Do you know how can I migrate my Firefox data to Librewolf? I think my main concern is history and then open tabs.
The last time (some 7 months ago) something like this happened, it turned out the Steam survey had a bug that oversampled Chinese machines. Maybe something similar happened here.
The 1.5B version can be run on basically anything. My friend runs it on his shitty laptop with a 512MB iGPU and 8GB of RAM (inference takes ~30 seconds).
You don’t even need a GPU with a lot of VRAM, as you can offload to RAM (slower inference, though).
I’ve run the 14B version on my AMD 6700XT GPU and it only takes ~9GB of VRAM (inference over 1k tokens takes 20 seconds). The 8B version takes around 5-6GB of VRAM (inference over 1k tokens takes 5 seconds)
The numbers in your second link are waaaaaay off.
You missed the entire point of the comment by incorrectly calling my country “Columbia”. I don’t even know what to say.
Let’s not waste any of our time and go ask in any Venezuelan forum about the topic.
It is not my intention to be rude. I’m from Colombia, follow Venezuela’s status closely (through media from a broad range of the political spectrum), see Venezuelan emigrants daily and have met quite a few Venezuelans, and yet Lemmy is the only place I’ve ever seen with people really convinced that Venezuelans love Maduro and that the current situation of the country is because of the sanctions.
It feels almost surreal, and reminds me of when some people on Reddit were convinced they knew my country’s political status better than me, all while mistakenly calling the country “Columbia”.
I’m not trying to argue that you should blindly trust my opinions here, but really, really, Venezuela is in a bad spot, nobody likes Maduro’s dictatorship, and the sanctions are not the main cause of any of that (though they do contribute). Either that, or somehow almost everybody in the whole of Latin America has a very biased opinion from first-hand experience, and only people from other continents can see the truth.
Yeah, the rigged ones lol. There’s even mathematical evidence of the rigging, with vote counts working out to exact percentages to just 2 decimal places, for every single candidate.
Venezuela hasn’t published the official tallies, nor let international observers be present at the elections. There was heavy repression on election day as well, plus some polling stations not letting people vote.
It is the poorer population that suffers the most. That’s the reason Venezuela has such a big emigration crisis, and why every Latin American country has seen such a massive influx of poor migrants. I experience this firsthand, almost daily.
It is not rich people that the militia constantly murders/kidnaps.
Absolutely, yes.
I kinda doubt that will happen. For instance, look at Venezuela: Venezuelans are beyond fed up with Maduro’s dictatorship, but there’s nothing they can do against the government forces.
Governments will do anything they can to prevent a paradigm change.
Not accurate. Pi would need to be a normal number for that to happen, something yet to be proven or disproven.
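For reference, normality in base $b$ means every finite digit string appears with the limiting frequency you’d expect from random digits (which in particular implies every finite string appears at all):

```latex
\lim_{n\to\infty}
\frac{\#\{\text{occurrences of } w \text{ in the first } n \text{ digits of } \pi\}}{n}
= b^{-|w|}
\quad \text{for every finite digit string } w \text{ in base } b.
```

Pi is widely conjectured to be normal, but no proof is known.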
You can, it’s literally the way the number is defined.
It would work the same way; you would just need to connect to your local model. For example, change the code to compute the embeddings with your local model and store those in Milvus. After that, run the inference by calling your local model.
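The retrieval flow can be sketched in plain Python. Here `embed()` is a toy stand-in for a local embedding model (it just hashes character trigrams), and the `store` list stands in for a Milvus collection; real code would call the model and use pymilvus inserts/searches instead:

```python
import math
import zlib

def embed(text, dim=64):
    # Toy stand-in for a local embedding model: hash character trigrams
    # into a fixed-size vector. A real setup would call the model instead.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are L2-normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Index step: store each document with its embedding
# (in real code this would be an insert into a Milvus collection).
docs = [
    "Milvus is a vector database engine",
    "Bash pipes connect commands together",
    "Grafana draws dashboards from Prometheus metrics",
]
store = [(d, embed(d)) for d in docs]

# Query step: embed the question and retrieve the nearest document
# (a similarity search in real Milvus code), then feed it to the LLM as context.
query_vec = embed("what is a vector database?")
best_doc, _ = max(store, key=lambda item: cosine(query_vec, item[1]))
print(best_doc)
```

The same two steps (embed at index time, embed + nearest-neighbor search at query time) apply whatever model and vector DB you plug in.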
I haven’t used inference with a local API, so I can’t help with that, but for the embeddings I used this model and it worked quite fast, plus it was a top-2 model on the Hugging Face leaderboard. Leaderboard. Model.
I didn’t do any training, just simple embed + inference.