My best list of free ChatGPT and other models. No signups required.

  • db0@lemmy.dbzer0.comM · ↑91 · edited · 1 year ago

    You don’t need to pirate OpenAI. I’ve built the AI Horde so y’all can use it without any workarounds or shenanigans, and you can use your PCs to help others as well.

    Here’s an LLM client you can run directly in your browser: https://lite.koboldai.net

      • db0@lemmy.dbzer0.comM · ↑7 · 1 year ago

        Unfortunately I’m not an expert in LLMs, so I don’t know. I suggest you contact the KoboldAI community, and they should be able to point you in the right direction.

    • Steeve@lemmy.ca · ↑2 · 1 year ago

      Aren’t KoboldAI models on par with GPT-3? Why not just use ChatGPT then?

      AI Horde looks dope for image generation though!

      • webghost0101@lemmy.fmhy.ml · ↑7 · 1 year ago

        Kobold is a program to run local LLMs. Some seem on par with GPT-3, but normally you’re going to need a very beefy system to run them, and slowly at that.

        The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles away from GPT-3.5. Exponential growth ftw. I have yet to see something as good and fast as ChatGPT.

        • jcg@halubilo.social · ↑2 · 1 year ago

          I’ve always wondered how it’s possible. No way they’ve got some crazy software optimisations that nobody else can replicate, right? They’ve gotta just be throwing a ridiculous amount of compute power at every request?

          • webghost0101@lemmy.fmhy.ml · ↑4 · edited · 1 year ago

            Well, there are two things.

            First, there is speed, for which they do indeed rely on many thousands of high-end industrial Nvidia GPUs. And since the $10 billion investment from Microsoft, they have likely expanded that capacity. I’ve read somewhere that ChatGPT costs about $700,000 a day to keep running.

            There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.

            For that quality of output they do deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce even higher-quality and more creative outputs.

            I don’t think GPT-4 is the biggest model out there, but it does appear to be the best one available.

            I can run a small LLM at home that is much, much faster than ChatGPT… that is, if I want to generate some unintelligent nonsense.

            Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output… if you don’t mind waiting a week for a single character to be generated.

            I actually think some of the open-source locally runnable LLMs like LLaMA, Vicuna and Orca are much more impressive if you judge them on quality versus power requirements.

    • djmarcone@kbin.social · ↑1 · 1 year ago

      Checking it out: how come I can’t paste my API key into the field on the options tab? I gotta type it out?

      • 🐱TheCat@sh.itjust.works · ↑2 · 1 year ago

        I’m curious, what’s your area of expertise? I’m interested in using AI as a programming assistant, but it seems an entirely different skillset than, say, a language model. I assume some models will be good in one area and some in another.

          • 🐱TheCat@sh.itjust.works · ↑2 · 1 year ago

            Very hit and miss. It’s okay if I’m trying to learn something new, and once or twice it has found and suggested a fix that I probably wouldn’t have thought of otherwise, but it also makes up methods and syntax, and then you’re playing ‘whack-a-mole’ to figure out where it hallucinated.

            I think right now it’s not really boosting my productivity much, but I think in another 5ish years it could be better.

  • Infiltrated_ad8271@kbin.social · ↑22 ↓1 · edited · 1 year ago

    AnonChatGPT should stop being recommended; it really sucks. It has a VERY strict character limit, immediately forgets/ignores the context, requires reCAPTCHA, and the “anon” part of the name is obviously fake if you read the privacy policy.

  • Icarus@lemmy.ml · ↑9 · 1 year ago

    Most of the links don’t even work, and the ones that do are terrible. Why so many upvotes?

    • On@kbin.social · ↑11 · 1 year ago

      And the title. Hacked? What’s being hacked here? They’re all using GPT in the backend.

      maybe bot votes? ㄟ(ツ)ㄏ

  • Sheltac@lemmy.world · ↑6 · 1 year ago

    Any news on how these tend to perform compared to GPT-4? I finally decided to toss OpenAI 20 quid to try it out for a month, and it’s pretty impressive.

          • Sneezycat@sopuli.xyz · ↑25 ↓1 · 1 year ago

            Sure! You’ll need (1 serving):

            -1 bread

            -2 cheese

            Pour the bread in a plate. Drip the cheese on top. Put in the oven at 1600°C for 8 seconds.

            Take the plate out of the oven, dip your fingers on the melt. Enjoy!

          • ChatGPT@lemmings.worldB · ↑2 · 1 year ago

            There are a few methods you can use to make ribs fall off the bone. Here are a few steps that should help you:

            1. Choose the right type of ribs. Baby back ribs are often a good choice as they tend to be more tender than other types.

            2. The first essential step is removing the membrane (also known as the silver skin) that’s on the bone side of your ribs. This is tough and does not render down during cooking.

            3. Seasoning: Before you cook the ribs, season them with your chosen spices or marinades. Allow them to marinate for a few hours, or preferably overnight in the fridge.

            4. Slow cooking: The secret to tender, fall-off-the-bone ribs is slow cooking. This can be achieved by baking, slow-grilling, or using a slow cooker. For baking in an oven, wrap the ribs in aluminum foil and bake at 275°F (135°C) for 2.5-3 hours, depending on the size of your rack.

            5. Optional step - After slow cooking, slather your ribs with your chosen barbecue sauce and grill for additional 10-15 minutes, or broil them in your oven. This will give you a nice caramelised finish.

            Remember, while “fall off the bone” is a commonly used term, ideally you want your ribs to be tender yet still have a bit of chew, as completely falling apart could mean they are overcooked. However, it all comes down to personal preference in the end. Enjoy your cooking!

    • quirzle@kbin.social · ↑8 · 1 year ago

      I’ve tinkered with a Discord bot using the official GPT-3.5 API. It’s astonishingly cheap. Using the 3.5-turbo model, I’ve never cracked $1 in a month and usually spend just a couple of cents a week. Obviously this would be different if you’re running a business with it or something, but for personal use like answering questions, writing short blurbs, and entertaining us while drunk… it’s not bad at all in my experience.
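      For a back-of-envelope sense of why casual use stays this cheap: the per-1K-token prices below are the mid-2023 gpt-3.5-turbo rates (they have changed since, so check current pricing), and the weekly call counts are made-up illustrative numbers.

```python
# Rough cost model for gpt-3.5-turbo API usage.
# Assumed mid-2023 rates: $0.0015 per 1K prompt tokens, $0.002 per 1K completion tokens.
PROMPT_PRICE_PER_1K = 0.0015
COMPLETION_PRICE_PER_1K = 0.002

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD charge for a single API call."""
    return (prompt_tokens / 1000 * PROMPT_PRICE_PER_1K
            + completion_tokens / 1000 * COMPLETION_PRICE_PER_1K)

# Hypothetical week of casual bot use: 200 calls,
# each ~500 prompt tokens and ~300 completion tokens.
weekly_cost = 200 * estimate_cost(500, 300)
print(f"~${weekly_cost:.2f}/week")  # → ~$0.27/week
```

      Even generous personal usage lands in the tens-of-cents range, which matches the anecdote above.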

    • XEAL@lemm.ee · ↑5 · 1 year ago

      You’re billed per token usage. GPT-3.5-Turbo price per 1K tokens is quite low now.

      I kinda made my own custom ChatGPT with Python (and LOTS of coding help from the ChatGPT web UI). It evolved from a shitty few-line script to a version that uses LangChain, has access to custom tools including custom data indexes, and has a persistent memory.

      What will ramp up the cost are things like how much context (memory) you want the chatbot to have. If you use something like a recursive summarizer, which summarizes a text in chunks over and over until it is below a set length, that also makes many API calls that consume tokens. Also, if you want your chatbot to use custom info you provide, solutions like LlamaIndex are easy to use but require quite a few tokens per query.
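      The recursive summarizer pattern can be sketched like this. It's a minimal illustration: `summarize_chunk` here is a trivial stand-in, whereas a real bot would make one API call per chunk on every pass, which is exactly why this pattern burns tokens.

```python
def summarize_chunk(chunk: str) -> str:
    # Stand-in "summary": keep the first sentence. A real implementation
    # would send the chunk to the chat API with a summarization prompt.
    return chunk.split(". ")[0].strip() + "."

def recursive_summarize(text: str, max_len: int = 500, chunk_size: int = 1000) -> str:
    """Repeatedly chunk-and-summarize until the text fits the length budget."""
    while len(text) > max_len:
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        summaries = [summarize_chunk(c) for c in chunks]
        new_text = " ".join(summaries)
        if len(new_text) >= len(text):  # no progress: bail out rather than loop forever
            return new_text[:max_len]
        text = new_text
    return text
```

      Each `while` iteration costs one API call per chunk, so long conversations multiply quickly.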

      In my worst month, with lots of usage due to testing and before the latest price drop, I reached $70.

      • speck@kbin.social · ↑3 · 1 year ago

        Loved the depth of this info, although it’s over my head. But I kind of understood? I have a project to focus on for the next while. But I hear that it’s possible to do, and that’s exciting.

        • XEAL@lemm.ee · ↑1 ↓1 · 1 year ago

          I know, it’s fuxxing dense, all the info about the OpenAI API and Python needed to create a model.

          I don’t even know how I got so far.

      • Aidan@lemm.ee · ↑3 · 1 year ago

        I’m working on a similar project right now with zero coding knowledge. I’ve been trying to find something like langchain all day. I built (by which I mean I coached GPT into building) a web scraper script that can interact with the web to perform searches and then parse the results, but the outputs are getting too big to manage in a hacked together terminal interface.

        How are you doing the UI? That’s what I’m finding to be the biggest puzzle, and it isn’t fun to solve. I’ve been looking at React as a way to do it.

        • XEAL@lemm.ee · ↑2 ↓1 · 1 year ago

          I use a Gradio chatbot interface. Gradio has all kinds of interfaces, and there’s one specially designed for chatbots.

          IDK if it’s the best option, but it’s what I found in shitty blog tutorials when I started. Even the Stable Diffusion WebUI uses it.

          It’s quite powerful, but a bitch to learn to use, IMO.
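          A minimal Gradio chatbot is genuinely small. This is a sketch assuming a Gradio version that provides `gr.ChatInterface`; the echo function is a placeholder where a real bot would call a model API.

```python
def respond(message, history):
    # Placeholder logic: a real bot would send `message` (plus `history`)
    # to a model API and return its reply.
    return f"Echo: {message}"

def launch_ui():
    # Imported lazily so the chat logic above stays testable without Gradio installed.
    import gradio as gr
    gr.ChatInterface(respond).launch()

if __name__ == "__main__":
    launch_ui()
```

          Gradio handles the chat layout, message history, and web server; you only supply the `respond` function.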

          • Aidan@lemm.ee · ↑2 · 1 year ago

            Thanks for the tip! I’ve been looking for something like that. It’ll save me a lot of frustration

    • QuarterlySushi@kbin.social · ↑4 · 1 year ago

      Of the language models you can run locally, I’ve found them awkward to use, and they don’t perform too well. If anyone knows of any newer ones that do a better job, I’d love to know.