Hey, Lemmies! I’ve been pondering an idea to enhance our automod system, and I’d love to get your input. LLMs have proven to be quite adept at sentiment analysis, consistently delivering accurate results. Here’s what I’m thinking: if we provide the LLM with a set of instance rules and feed it a message, we could ask it whether or not the message adheres to those rules. This approach has the potential to create a robust automod that works effectively in most cases. What are your thoughts on this? Let’s discuss and explore the possibilities together!
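To make the idea concrete, here's a minimal sketch of how the check could work: bundle the instance rules and the message into a yes/no prompt, then interpret whatever the model sends back. The function names are hypothetical and the actual LLM call is stubbed out, since the post doesn't commit to any particular model or API.

```python
# Hypothetical automod helpers: prompt assembly + verdict parsing.
# The real LLM call is intentionally left out; any chat-style API would fit.

def build_moderation_prompt(rules, message):
    """Combine instance rules and a user message into a yes/no prompt."""
    rule_list = "\n".join(f"- {r}" for r in rules)
    return (
        "You are a moderation assistant. Instance rules:\n"
        f"{rule_list}\n\n"
        f"Message:\n{message}\n\n"
        "Does this message follow ALL of the rules above? Answer YES or NO."
    )

def parse_verdict(llm_reply):
    """Interpret the model's reply; anything unclear passes by default."""
    answer = llm_reply.strip().upper()
    if answer.startswith("NO"):
        return False  # flag the message for moderator review
    return True       # YES (or ambiguous output) lets the message through

rules = ["No personal attacks", "No spam or advertising"]
prompt = build_moderation_prompt(rules, "Buy cheap watches at example.com!")
# In a real deployment you'd send `prompt` to the LLM here and feed the
# reply into parse_verdict().
print(parse_verdict("NO - this looks like spam"))  # False
```

Defaulting unclear replies to "pass" keeps the bot from removing posts on garbled model output; a stricter instance might prefer to queue those for human review instead.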

  • BlameThePeacock@lemmy.ca · 1 year ago

    While it’s a great theory, people who want to use those bad words or concepts have been working around language filters for ages. Anti-vaxxers, for example, will call vaccines “vitamins”.