I have some suggestions: let’s not make people translate to English unless they are learning English. I don’t want to be thinking about whether “I’m coming Friday” is correct grammar in English. I want to be thinking about my target language!
Thanks for the suggestion, I’ll definitely try to make the app as language inclusive as possible!
Also, sorry if the post title was too vague. The app is similar to Duolingo in structure and idea, but it's not specific to language learning; it's supposed to cater to any subject, really.
For example, I personally use it to study for my university subjects.
This app seems to be about generic courses in general, not just language learning. So someone could make a language course in the way you've described.
Yeah, it's a minor pet peeve of mine with Duolingo. My source language doesn't have/need articles like "the" or "a", so I often forget them. It's so annoying to fail because of such a minor thing, especially when their suggested English often looks terrible.
In some languages that's not a minor thing, because of gender. I mean, that's a problem with the language and it should improve, but for now you have to use the gender for good communication.
We’re talking about, say, learning Spanish and Duolingo be like “now translate this very long and overly specific sentence to English”
Then you end up trying to construct the English sentence even though you’re learning Spanish
Here’s an example where I think my sentence is perfectly fine, but it just expected a different word order. It expected me to put If at the beginning, but I didn’t notice it was capitalized.
Korean doesn’t even have capital letters, why is it doing some gotcha about English capitalization when I already know English?
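A grader doesn't have to be that strict. As a sketch (not code from any of the apps discussed here), answers could be normalized before comparison, so capitalization and punctuation differences don't count as mistakes:

```typescript
// Hypothetical lenient grader: ignores case, punctuation, and extra whitespace,
// so "If you go, I go." and "if you go i go" count as the same answer.
function normalizeAnswer(text: string): string {
    return text
        .toLowerCase()
        .replace(/[.,!?;:'"]/g, "") // strip punctuation
        .replace(/\s+/g, " ")       // collapse whitespace
        .trim();
}

function isCorrect(given: string, expected: string): boolean {
    return normalizeAnswer(given) === normalizeAnswer(expected);
}
```

Word order would still be graded, but at least a capital letter alone could never fail you.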
Hi, you created the Korean course right? Thanks for contributing!
If you have any feature requests or suggestions please put them here: Feature Requests
There’s also a collection specific for question types: Question Types Collection
Yeah, I'm just testing it out. For a true Duolingo experience it would need fill-in-the-blank and audio.
“Fill in the blank” is now available, just got done coding it.
If you want to try it out, I created a new course “Testing out new question types”.
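For anyone curious how a question type like this can be modeled: here's a minimal sketch. The field names (`template`, `answers`) are made up for illustration and aren't from the actual app.

```typescript
// A fill-in-the-blank question: text fragments with one expected answer per blank.
// "The capital of France is ___." -> template: ["The capital of France is ", "."]
interface FillBlankQuestion {
    template: string[]; // text fragments surrounding the blanks
    answers: string[];  // one expected answer per blank
}

// Case-insensitive grading of the user's filled-in blanks.
function gradeFillBlank(q: FillBlankQuestion, given: string[]): boolean {
    return q.answers.length === given.length &&
        q.answers.every((a, i) => a.trim().toLowerCase() === given[i].trim().toLowerCase());
}
```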
Yeah agree, I’ll definitely implement that one.
Right now I’m working on “match the cards”.
Edit: For audio I'm not so sure how I would do it. I don't think most people would record it themselves when creating a course, so I would need to generate it. Then you'd have the issue of correct pronunciation…
https://github.com/mozilla/TTS
Also tortoise tts and a few other options
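Another zero-cost option, purely as a sketch: modern browsers ship text-to-speech via the Web Speech API, so nothing would need to be generated server-side. Voice quality and pronunciation vary by platform, which matches the concern above. The `Voice` interface below just mirrors the relevant fields of the browser's `SpeechSynthesisVoice`:

```typescript
// Minimal mirror of the browser's SpeechSynthesisVoice fields we need.
interface Voice { name: string; lang: string; }

// Pick the first installed voice matching the course language,
// e.g. "ko" matches "ko-KR" for a Korean course.
function pickVoice(voices: Voice[], langPrefix: string): Voice | undefined {
    return voices.find(v => v.lang.toLowerCase().startsWith(langPrefix.toLowerCase()));
}

// In a browser you would then do something like:
//   const utterance = new SpeechSynthesisUtterance("안녕하세요");
//   utterance.voice = pickVoice(speechSynthesis.getVoices(), "ko") ?? null;
//   speechSynthesis.speak(utterance);
```

The obvious downside: if the user has no voice installed for the course language, there's no audio at all, so a server-side TTS fallback might still be needed.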
No license?
In case OP doesn't know: if a repo hasn't got a licence, it's implied it's licensed under "all rights reserved", so not open source! You need to pick one: https://choosealicense.com
it’s implied it’s licensed under “all rights reserved”, so not open source!
Oh, I actually did not know that. I'll try to remember to add a licence right from the get-go from now on, thanks :)
I think you want to use the AGPL. With the plain GPL, people can still make a closed-source website out of your project due to the ASP loophole.
Yeah you’re right. I switched it to AGPL.
<Sips licence like a fine wine served at a dinner party.> Ah, yes, GPLv3, exquisite choice.
🎉
It’s GPLv3 now.
oh, right. Forget that every time. I’ll add one.
This is a really great use of LLMs! Seriously, great job! Once it's fully self-hostable (including the LLM model), I will absolutely find space for it on the home server. Maybe using Rupeshs fastsdcpu as the model and generation backend could work. I don't remember what his license is, though.
Edit: added link.
Thanks! I’m already eyeing ollama for this.
That sounds cool! Is there already a release? If not, don’t rush it :)
Edit: never mind. I just saw the website 😅
Thanks :). Yeah, it’s publicly accessible: nouv.app/. I use it daily already but it still has tons of bugs.
Unless I disable "Always use secure connections", it breaks on the cert.
Hm that’s very weird. I can’t replicate it and I used some random SSL checker website and it checks out as well.
Really not sure why that’s happening.
WFM. Looks like you’re using Let’s Encrypt, which is fine, and everything seems to be consistent. I think you’re good.
It's a great-looking site at first glance (haven't signed up yet). I just sandboxed a browser and let it run without forcing HTTPS. Funny thing is that it does show as HTTPS when HTTPS enforcement is disabled.
I'll take it for a spin this afternoon when I get back home (or on my phone when I get bored at the recital my wife is forcing me to go to 🤣🤣🤣).
Thank you! Let me know if you find out more about the issue. I'll also keep an eye out for the cause.
Edit: I’ve opened an Issue for this on GitHub: https://github.com/cr4yfish/nouv/issues/2
Is there any interest in getting local models to run using this? I’d rather not use Gemini, and then all the data can reside locally (and not require a login).
I'd be happy to work on this, though I'm a Python developer, not a TypeScript one.
Yeah, good idea. It's possible to do that with WebLLM & LangChain. Once LangChain is integrated, it's pretty similar to the Python version, so it should be doable, I think.
Ah interesting - again, happy to help out if there's anything I can contribute to. I can make a feature request on GitHub if there's interest.
Please do :). I take any help I can get.
Ohh great project
Thanks :)
Hope to see it grow in the best way.
@Cr4yfish nice project 🙂
I’m a bit worried about the AI part, though, as you’d want an app whose main purpose is “learning” to guarantee, if not the reliability of the material (since anyone can contribute), at least the reliability of the course generation process that it proposes.
As far as I know, this is not possible with current generative AI tools, so what's your plan to make sure hallucinations do not creep in?
Thanks. My general strategy for GenAI and reducing the amount of hallucinations is to not give it the task of making things up, but to have it work only on existing text - that's why I'm not allowing users to create content without source material.
However, LLMs will be LLMs; I've been testing it a lot and have already found multiple hallucinations. I built in a reporting system, although only submitting reports works right now, not viewing reported questions.
That's my short-term plan to get good content quality, at least. I also want to move away from Vercel AI & Gemini to a LangChain agent system, or maybe a graph, which should increase the output quality.
Maybe in some parallel Universe this really takes off and many people work on high quality Courses together…
Cool project! Are there any plans on releasing a mobile app in the future? I'm allergic to PWAs.
Thanks, haha. I'd love to develop a native app for it too, but this is a zero-budget project (aside from the domain). The Play Store has a one-time fee, so that's 25€ for Android, plus 8€/month for the iOS App Store just to have the app on there.
In theory, I could just have a downloadable .apk for Android to circumvent the fee but most people don’t want to install a random .apk from the internet. And I’m not developing a Native App for like 3 people excluding myself (I’m an iPhone user).
Soo, yeah that’ll probably not happen :(.
This post gathered a bit of traction, so hopefully more people help out. F-Droid is a better marketplace for OSS compared to the Play Store, because people downloading from the Play Store act a little entitled, especially towards OSS software.
Would be nice for sure… 0 forks yet… but I’m hopeful :D
In theory, I could just have a downloadable .apk for Android to circumvent the fee but most people don’t want to install a random .apk from the internet.
You could’ve considered F-Droid.
Soo, yeah that’ll probably not happen :(.
It’s sad. Anyway, good luck with future development!
Play Store console costs 25€? I only paid 5€ when I made my account.
That's weird. I did a quick google and it does seem to be 25 USD. Last time I made one it was 25 for sure as well - but that one got banned due to inactivity D:
deleted by creator
I personally love PWAs - why the hate for them? I think more apps should be PWAs instead.
Native apps will always be better, imo. I think fewer apps should be PWAs. No offense to those who use them, though. It's just my personal preference.
Fair opinion. Native Apps do have some huge advantages, but also some disadvantages.
I've coded both before (although way more PWAs), and with native you also run into platform issues as long as you don't ship exclusively for one platform.
PWAs have a huge advantage here, since they run the same everywhere, as long as the platform has a browser that isn't Safari.
That won’t happen because Apple and Google hate open source.
This is why F-Droid exists.
Can it be used offline?
The UI mostly works offline once loaded, due to aggressive caching. Downloading course content was on the initial roadmap, but I removed it since I wasn't sure if anyone would want the feature.
Syncing stuff is a real pain in the ass, but I'll implement it if at least a couple of people want it.
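For anyone wondering why syncing is the painful part: the core of it is queuing changes made offline and replaying them once connectivity returns, and the genuinely hard part (conflict resolution between devices) isn't even shown here. A hypothetical sketch, not the app's code:

```typescript
// A progress update made while offline; field names are made up for illustration.
type ProgressUpdate = { levelId: string; score: number; at: number };

class SyncQueue {
    private pending: ProgressUpdate[] = [];

    // Called whenever the user finishes a level while offline.
    record(update: ProgressUpdate): void {
        this.pending.push(update);
    }

    // Called when the app comes back online; `send` would POST to the server.
    // An update is only dropped from the queue after a successful send,
    // so a failed request leaves it in place for the next flush.
    async flush(send: (u: ProgressUpdate) => Promise<void>): Promise<number> {
        let sent = 0;
        while (this.pending.length > 0) {
            await send(this.pending[0]);
            this.pending.shift();
            sent++;
        }
        return sent;
    }
}
```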
An offline mode would definitely be something I would want, tho it isn’t high priority.
I added it back to the roadmap :).
I don't know how small of a niche I'm in, but I still use dictionary software from the Windows 95–2000 era and Android software on a completely offline and vanilla VM, partly because my internet randomly goes bad, and partly because I am neurotic about digital content vanishing once support ends.
Understandable. I added a proper offline mode back to the Roadmap on github.
How does the level creation from a pdf work and does it support languages other than English?
I use Gemini, which supports PDF File uploads, combined with structured outputs to generate Course Sections, Levels & Question JSON.
When you upload a PDF, it first gets uploaded to an S3 bucket directly from the browser, which then sends the filename and other data to the server. The server then downloads that document from S3 and sends it to Gemini, which streams JSON back to the browser. After that, the PDF is permanently deleted from S3.
Data-privacy-wise, I wouldn't upload anything sensitive, since idk what Google does with PDFs uploaded to Gemini.
The prompts are in English, so the output language is English as well. However, I actually only tested it with German lecture PDFs myself.
So, yes, it probably works with any language that Gemini supports.
Here is the source code for the core function of this feature:
export async function createLevelFromDocument({
    docName, apiKey, numLevels, courseSectionTitle, courseSectionDescription
}: {
    docName: string, apiKey: string, numLevels: number,
    courseSectionTitle: string, courseSectionDescription: string
}) {
    const hasCourseSection = courseSectionTitle.length > 0 && courseSectionDescription.length > 0;

    // Step 1: Download the PDF and get a buffer from it
    const blob = await downloadObject({ filename: docName, path: "/", bucketName: "documents" });
    const arrayBuffer = await blob.arrayBuffer();

    // Step 2: Call the model and pass the PDF
    // const openai = createOpenAI({ apiKey: apiKey });
    const google = createGoogleGenerativeAI({ apiKey: apiKey });

    const courseSectionsPrompt = createLevelPrompt({
        hasCourseSection,
        title: courseSectionTitle,
        description: courseSectionDescription
    });

    const isPDF = docName.endsWith(".pdf");
    const content: UserContent = [];

    if (isPDF) {
        // Attach the PDF itself alongside the instructions
        content.push(pdfUserMessage(numLevels, courseSectionsPrompt) as any);
        content.push(pdfAttatchment(arrayBuffer) as any);
    } else {
        // Non-PDF uploads are treated as HTML text
        const html = await blob.text();
        content.push(htmlUserMessage(numLevels, courseSectionsPrompt, html) as any);
    }

    // Stream structured JSON (sections, levels & questions) back to the caller
    const result = await streamObject({
        model: google("gemini-1.5-flash"),
        schema: multipleLevelSchema,
        messages: [{ role: "user", content: content }]
    });

    return result;
}
Haha. Well we can’t all actually be Duolingo and employ people to create the courses :D
It’s all just flashcards with extra steps and anki already exists. /shrug
I've made custom flashcards for Anki to study stuff, and I tested this for some similar things; it was a lot faster and easier. Making cards in Anki feels like it takes forever, so the investment in a custom set is only worth it for things you need to study for a long time.
If all you want is to generate a bunch of flashcards fast and you have a pdf with the info presented clearly it’s an easy method.
Well, yes, in a way at least. I'm not pretending to invent something that's never been done before. Although it already has multiple features that Anki doesn't have.
Is it for self-host ppl too?
For all projects/apps, I am looking for OIDC, S3 and PgSQL. It's easier to implement these earlier on, and they will make any project more popular in the self-host community.
Is it for self-host ppl too?
In theory not an issue. I use Supabase, which you can self host as well.
You can also self host the Mistral Client, but not Gemini. However, I am planning to move away from Gemini towards a more open solution which would also support self hosting, or in-browser AI.
I am looking for OIDC, S3 and PgSQL
Since I use Supabase, it runs on PgSQL and Supabase Storage, which is just an adapter to AWS S3 - or any S3, really. For auth, I use Supabase Auth, which uses OAuth 2.0 - is that the same as OIDC?
Very cool. You can check out ollama for hosting local ai model.
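For reference, ollama exposes a small REST API, listening on `http://localhost:11434` by default. A hedged sketch of what a request to its `/api/generate` endpoint looks like (the model name is just an example; anything pulled locally works):

```typescript
// Build the JSON payload for ollama's /api/generate endpoint.
function buildOllamaRequest(model: string, prompt: string) {
    return {
        model: model,   // e.g. "llama3" - must already be pulled locally
        prompt: prompt,
        stream: false,  // return one JSON object instead of a token stream
    };
}

// Usage (assumes an ollama server is running locally):
//   const res = await fetch("http://localhost:11434/api/generate", {
//       method: "POST",
//       body: JSON.stringify(buildOllamaRequest("llama3", "Summarize this lecture: ...")),
//   });
//   const { response } = await res.json();
```

Since it's just HTTP plus JSON, swapping it in behind a provider abstraction like LangChain should be straightforward.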
OIDC is an extension of OAuth2 that focuses on user authentication rather than user authorization. Once OIDC authenticates a user, it uses OAuth2 specifications to perform authorization.
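To make that distinction concrete: the thing OIDC adds on top of OAuth2 is the ID token, a signed JWT describing the authenticated user. After verifying the signature, the relying party just decodes the base64url-encoded payload to read claims like `sub` (user ID) and `iss` (issuer). A minimal sketch, decoding only, with no signature verification, so don't use it as-is for auth:

```typescript
// A JWT is three base64url segments: header.payload.signature.
// This reads the payload (the claims) without verifying the signature.
function decodeIdTokenPayload(idToken: string): Record<string, any> {
    const payloadB64 = idToken.split(".")[1];
    const json = Buffer.from(payloadB64, "base64url").toString("utf8");
    return JSON.parse(json);
}
```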
The easiest way to support OIDC is through a library from your framework/language. All major languages should already have an OIDC lib. Take a look at Authelia, which has pretty nice docs. We host lots of apps and we don't want to log in a hundred times, once for each app. It's nice to log in once and have all apps play nice with each other ;)
I made an Issue for Feature requests. I’ve put OIDC in there: Feature Requests & Suggestions
Yeah, I’ll probably go with langchain and some user-options for different LLMs.
I’ll also look into authelia
The website looks sooo good!
Cool concept! Good luck with it
Hope you can get around to letting users switch models, and maybe let people use open-source/open-data models?
I've also heard about the Vercel AI SDK, which lets you use different models through a common SDK, so the app doesn't rely on one specific implementation.