Problem/Justification
(What is the problem you are trying to solve with this feature/improvement or why should it be considered?)
TrueNAS already has a documentation and support system in place, and feeding it AI-created slop will damage an important community support resource.
Impact
(How is this feature going to impact all TrueNAS users? What are the benefits and advantages? Are there disadvantages?)
It offers a route to protecting what should be a keystone of community support from long-term degradation, and it saves the repetition of explaining when an AI-provided answer is wrong.
User Story
(Please give a short description on how you envision some user taking advantage of this feature, what are the steps a user will follow to accomplish it)
…it means we can use the forum for human discussion?!
(This post might come across as facetious, but it's a serious discussion point.)
So it is at least honest that it is very poor, providing terrible advice. That middle weakness is a big red flag.
Were we not saying that if a posting came from an AI, the thread must say so at the top of the posting/reply? I don't think it is a rule yet, but maybe it should be.
I agree that if someone just generates and copy-pastes an answer from ChatGPT that they don't even understand, it's bad, and I don't promote that. I can ask ChatGPT myself; I don't need anyone to do that for me.
But I just need to say something before this turns into hating anyone who uses AI, as I sometimes see.
When I debug something, try to understand something, or eventually create a guide, I use ChatGPT heavily.
My Incus guide? I used ChatGPT heavily to generate QEMU commands and to understand them. So it's absolutely created using AI.
Or when I tried to debug Frigate: again, a lot of time in ChatGPT, and I even used Copilot to create a Python script for me to test different scenarios.
So if I had to tick a box asking "Was this guide/post created using AI?", I would have to check yes, because I used AI.
Does that make it AI slop and without any value?
So just remember not to become Luddites.
Content created with the help of AI can be either good or bad.
There is a difference between using an AI to synthesize a new, useful and correct guide, which increases the knowledge base of the community, and copy/pasting AI answers.
The former is the way the world is going, creating AI assistants which need to be supervised. The latter is giving lunatics the keys to the asylum.
I was trained in C, but at work we have to do some heavy lifting via VBA in Excel. I know enough about coding to ask the right questions, and tasks that would have taken me days of frustration now finish in a matter of hours thanks to Copilot.
That does not make Copilot a good tool for everything, but for VBA, it's been really good to me.
Looks like people here have very similar opinions to mine on AI. Nice to see we are on the same page.
I recently read some AI-hating posts on Reddit, so I have a little PTSD.
We expect people to try to help themselves before posting, right? But today, if I Google something, I'm likely to get an AI-generated answer at the top of my search results (and that's not even considering how many of the indexed pages might be AI-generated). Heck, even on these forums (I'm including the old one in this), there are plenty of bad answers, or answers that were good for CORE but a disaster for SCALE (e.g., ZFS filesystem version out of date? | TrueNAS Community).
Bad information is a problem, and an even bigger problem is users who are unable to distinguish bad information from good, but is it really important whether that bad information came from an actual human or an AI?
I think AI answers could be improved by using a method similar to what the Frigate docs use: https://docs.frigate.video/
They have an AI chatbot that pulls data not only from the docs but also from discussions and GitHub issues.
This can offer more specific answers than general ChatGPT.
So already-answered questions can be used by the chatbot to respond before people post on the forums.
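The multi-source retrieval idea above can be sketched roughly like this. All the corpus entries are made-up examples, and the keyword-overlap scoring is a stand-in; a real chatbot like Frigate's would use embedding search over actual docs, discussion threads, and GitHub issues:

```python
import re

# Rough sketch: index snippets from docs, forum discussions, and GitHub
# issues alike, then surface the best match before a user posts a new
# question. All entries below are invented examples for illustration.

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

# Hypothetical corpus mixing the three source types the bot pulls from.
corpus = [
    ("docs", "Set the recording retention days in the Frigate config file"),
    ("discussion", "Transcoding fails on SCALE until the GPU driver app is installed"),
    ("github-issue", "Incus VM loses network after upgrade; fixed by recreating the bridge"),
]

def best_match(question):
    """Return the (source, snippet) pair with the largest token overlap."""
    q = tokenize(question)
    return max(corpus, key=lambda item: len(q & tokenize(item[1])))

source, snippet = best_match("Why does my Incus VM have no network after an upgrade?")
print(source)  # the GitHub issue is the closest prior answer
```

The point of the sketch is only that answers already given in discussions and issue trackers become retrievable, so the bot can be more specific than a general-purpose ChatGPT.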
If this is the system that TrueNAS decides to go with for first-line community support, then there need to be significant quality controls on the information that the support AI is feeding off, i.e., strict forum moderation and vetting of the quality of the posts (ideally, from a TrueNAS perspective, by using "disposable" community members to test the guides created, but that has downsides…).
For you? It's likely fine, because you are in a better position to understand what an AI can assist with, and thus you understand that you need to verify the output.
For users who are less versed in those caveats, the use of chatbots will result in false "advice" being followed, due to hallucinations and biased training data.
I would argue that most people on the planet are relatively non-technical and are not really going to understand this.
One reason that LLMs often work well for programming is that there are accepted approaches to solving a programming problem whereas solving an issue inside TrueNAS can be a lot more nuanced because of the much higher entropy caused by CORE vs. SCALE, Docker vs. Kubernetes, virtio vs. incus, and on and on.
Tasks whose execution order, parameters, etc. have not changed much are hence much more likely to be answered correctly by AI; it is so much easier to mimic an expert when 100+ sources consistently point to the same solution.
On the other hand, it is usually the less well-defined problems that give humans fits (hence the call for Google/AI to help), and the AI's mimicry is less likely to be spot on unless the parameters of the question are bounded carefully for version dates and include all relevant information.
That, to me, is the main issue with AI: it gives the user the illusion of an all-knowing, competent resource, when in fact it's really important for the user to understand the limitations of AI, and of parsing in particular, in order to limit hallucinations and bound problems properly.
I'm not very familiar with AI; however, my few interactions with it gave me subpar results (well, perhaps I just can't write prompts correctly). I think one of the issues is that it is "trying" to answer your exact question instead of trying to be an expert in the particular field.
For example, the last time I asked whether it was better to deploy Jellyfin as a container or as a VM, it gave some pros/cons of LXC vs. VMs, which was not connected to Jellyfin in any way. It didn't even consider the transcoding topic. And even when I pointed that out directly (about transcoding), it spoke of some basics that I could Google myself. It didn't bring up SR-IOV for VMs, for one.
So you could lead an AI on a topic that you're already familiar with, but an AI could not lead you on a topic you don't know.
My experience with Gemini is crappy too. Same answers as ChatGPT when you tell it that it was wrong: "You are correct, thank you for correcting me. Here is the correct answer." And then that answer is wrong as well.
I do think AI will be much better in the coming years, but I hope one is not called "Skynet".
Your LLM even states that it learns from "TrueNAS websites", so is it, or is it not, being trained on the ChatGPT and other slop answers that get copy-pasted onto the forum?