The Best Way to Evade Linguistic Analysis | llamafile Setup Guide : OpSec

The tutorial section of this post assumes a Linux-based operating system, and the presence of common programs like `curl`; however, because llamafile[1] runs on any operating system, you should be able to replicate this anywhere with relative ease!
Good effort post, keep it up!
good stuff, will showcase it in one of my future tutorials. hopefully this works fine inside a whonix workstation VM
/u/OCDProcras
2 points
8 months ago
Will this work on Tails?
/u/RobertRon
1 points
8 months ago
It's pretty difficult to run this on Tails : (
/u/FreshBread
1 points
8 months ago
/u/inadahime do you feel that using an LLM will put users at greater risk because their messages are now recorded by AI servers, along with all the information that is intended to be encrypted and never exposed to the clearnet?
/u/inadahime 📢
2 points
8 months ago*
This guide shows you how to run a local LLM - this means that your message is never sent to any server. All the inference happens locally, and you can disable your internet access while using it to prove this.

If you use something like ChatGPT for your stylometric obfuscation despite this guide, it would absolutely put you at greater risk than doing nothing. Local is the way to go, as this post outlines.
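If you want to verify the no-network claim yourself, here is a minimal sketch on Linux (assuming util-linux's unshare is available; the llamafile name matches the one built in the post):
# Run the llamafile inside a user namespace that has no network interfaces.
# If inference still works in here, nothing is being sent to any server.
$ unshare -r -n ./CounterStylometry.llamafile -p 'test sentence to rewrite'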
/u/FreshBread
2 points
8 months ago
Thanks for clearing that up. You are 100% correct!
/u/[deleted]
1 points
8 months ago
ayo explain local LLMs. is it usable offline? any guide to get it over tor??
/u/inadahime 📢
2 points
8 months ago*
Yes, it's usable offline following the initial setup. To get the model itself over Tor, see ref. 4 and pick a GGUF from the "Files and versions" tab, e.g. gemma-2-2b-it-abliterated-Q6_K.gguf. Then, hit download - the file is 2GB, so it may take a second to fetch over Tor.
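If you prefer the command line, here is a sketch of fetching the model non-interactively - note the repository path below is a placeholder, so substitute the real download URL from the "Files and versions" tab in ref. 4:
# Placeholder URL: copy the actual link from ref. 4's "Files and versions" tab.
$ torsocks curl -L -o gemma-2-2b-it-abliterated-Q6_K.gguf \
    https://huggingface.co/<repo>/resolve/main/gemma-2-2b-it-abliterated-Q6_K.gguf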

To download the two llamafile programs over Tor, prefix each curl invocation with "torsocks", like this:
# At the time of writing, `0.8.16` was the latest version of llamafile. Change this as needed for future releases.
$ torsocks curl -L -o CounterStylometry.llamafile https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.16/llamafile-0.8.16
$ torsocks curl -L -o zipalign https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.16/zipalign-0.8.16
$ chmod a+x CounterStylometry.llamafile
$ chmod a+x zipalign

All of the steps following this should remain the same as in the original post.
/u/[deleted]
1 points
8 months ago
what are llamafiles? explain it for babies
/u/inadahime 📢
2 points
8 months ago
llamafile is a single-executable LLM inference software that runs on nearly any operating system. It's a special format developed by Mozilla that lets you ship a language model file and the inference engine together - basically, llamafiles are programs containing LLMs so that when you type ./whatever.llamafile it runs the LLM and lets you talk to it. In the post, I guide you through the creation of a llamafile that uses the Gemma-2 LLM to anonymise inputs, thus counteracting stylometric analysis.
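For concreteness, here is a rough sketch of that creation step - the .args contents are illustrative (see the original post for the real arguments), and the filenames match the download commands earlier in this thread:
# The .args file holds one default argument per line; a trailing "..." lets
# extra command-line arguments pass through to the engine.
$ printf -- '-m\ngemma-2-2b-it-abliterated-Q6_K.gguf\n...\n' > .args
# zipalign bundles the model weights and default arguments into the executable.
$ ./zipalign -j0 CounterStylometry.llamafile gemma-2-2b-it-abliterated-Q6_K.gguf .args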
/u/[deleted]
2 points
8 months ago
very good bro A grade post love to see it
/u/inadahime 📢
1 points
8 months ago
Cheers :-)
/u/cryptopunk69
1 points
6 months ago
You do not want to use any LLM that is not hosted "offline" on your own machine. It will gobble up all of your data and use it to enslave you. You will need a decent GPU to run any low-end models, and a beefy GPU to run the good ones. You can choose to run models only on your CPU but it is terribly slow. Look into llama.cpp.
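If you want to try it, a rough sketch of a CPU-only setup - build steps and binary names change between llama.cpp releases, so treat this as illustrative and follow the repo's README:
$ git clone https://github.com/ggerganov/llama.cpp
$ cd llama.cpp
$ cmake -B build && cmake --build build --config Release
# The CLI binary is "llama-cli" in newer releases ("main" in older ones).
$ ./build/bin/llama-cli -m /path/to/model.gguf -p "Hello" -n 64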
/u/[deleted]
1 points
6 months ago
is there guides to set it up?
/u/cryptopunk69
1 points
6 months ago
Read the github repository; it should set you up. You can find some other articles as well.
/u/HeadJanitor ۩ 𝓜𝓘𝓐 ۩
1 points
8 months ago
Such elaborate and beautiful work by /u/inadahime. Thank you for this marvelous composition.
/u/TheCokeReview
1 points
8 months ago
This is great. Thank you for the contribution.
/u/kepogut
1 points
5 months ago
Using models is very cool. But how do you use the tools you've provided to separate multiple personalities from each other? I understand that when you give instructions to a model, you introduce randomness into the model's choice of actions (the model temperature): if it's 0 you'll get the same answers, and if it's 1 you'll get completely different ones.

The question is how to maintain one style for one personality without giving any hint that the style is being hidden, and how to separate the styles of different personalities if you have 100 of them, maintaining each one without giving any hint that methods of protection from linguistic analysis are in use?
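For concreteness, I imagine something like this per persona - flag names follow llama.cpp conventions (check ./CounterStylometry.llamafile --help), and the seeds and prompt files below are made up:
# One fixed seed and one rewriting instruction per persona: each identity's
# output stays repeatable while the identities stay distinct from each other.
$ ./CounterStylometry.llamafile --temp 0.3 --seed 1111 \
    -p "$(cat personas/alice.prompt) $(cat draft.txt)"
$ ./CounterStylometry.llamafile --temp 0.3 --seed 2222 \
    -p "$(cat personas/bob.prompt) $(cat draft.txt)"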
/u/EduPurposesOnly 🍼
1 points
3 days ago
i am wondering if anyone has any updates on this topic as the ai landscape moves fast.

i think the efficiencies with deepseek were more in relation to reasoning and the use of synthetic data to improve the model, but have there been any efficiency improvements to any llms that would allow a local model to run on tails and a cpu only?

i want to say it will be possible someday with all the improvements but not sure if we are there now, 1 month away or 1 year away. does anyone know?

i guess the model size is not the issue here but the amount of compute required. what is the variable that makes a model not require as much compute to the point where you don't need to load it into the vram of a gpu anymore?

could someone make an llm specifically for this use case? tailsos, cpu, and only rewriting text?

if i had an llm this could be more concise and made into a single question... update?