Michael, I'm in awe of your well-organised, clear thinking, and I'm rushing to meet the deadline for the essay competition, since I saw it late. I hope to upload my essay to your architecture grading machine later today and am very excited to see how it responds. Meanwhile, some questions are running through my mind.
I'm assuming that all the essays that we, enthusiasts, upload will further train your machine. But I wonder what you trained it on initially. I seem to remember reading in one of your pieces that you used 1000s of essays. You seem to be a generous person, but if you trained your machine on other people's essays, how do you go about compensating them? How does it work with copyright? And what is the underlying AI? Is the analysis done purely by algorithm? If so, what is the AI part of it?
These questions are not covert judgement. I'm just curious and trying to form an opinion, given all the controversy around LLMs and other AI. Anyway. Time to get back to my essay!
Hey Jessica, I have some details on this in the terms and conditions of the essay prize (on the EA website). The main thing to take comfort in is that I’m not using uploads to train/fine-tune the app. It’s effectively an algorithm that uses AI/prompts in very granular ways. I evaluated it in the sense that I occasionally watch how it thinks and then make manual adjustments, but in no way does it take your source text, or even the feedback from your source text, and then use that to automatically shape future sessions. I hope that helps clarify!
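For readers wondering what "an algorithm that uses AI/prompts in very granular ways" might look like in practice, here is a minimal hypothetical sketch. The rubric dimensions, prompt wording, and `ask_llm` function are my own illustration, not the actual app; the point is only the shape of the design: one focused prompt per dimension, with nothing from the upload stored or used to retrain anything.

```python
from typing import Callable

# Hypothetical rubric: each dimension gets its own narrow prompt,
# rather than one giant "grade this essay" request.
RUBRIC = {
    "structure": "Rate the essay's structure from 0-10 and explain why.",
    "clarity":   "Rate the essay's sentence-level clarity from 0-10.",
    "voice":     "Rate how distinct the author's voice is, 0-10.",
}

def grade_essay(essay: str, ask_llm: Callable[[str], dict]) -> dict:
    """Run one granular prompt per rubric dimension and collect results.

    `ask_llm` is any function that sends a prompt to a language model
    and returns {"score": int, "notes": str}. It is a stand-in here so
    the sketch stays model-agnostic.
    """
    report = {}
    for dimension, instruction in RUBRIC.items():
        prompt = f"{instruction}\n\nESSAY:\n{essay}"
        report[dimension] = ask_llm(prompt)
    # Overall score is just the mean of the per-dimension scores.
    scores = [r["score"] for r in report.values()]
    report["overall"] = sum(scores) / len(scores)
    return report
```

Because the model calls are stateless, nothing in this design "learns" from an upload; any improvement comes from the author manually adjusting the prompts, which matches the workflow Michael describes.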
Thanks Michael! That is very reassuring!
Your machine is awfully good, Michael, even if it has just ruined my weekend ahead. Still sooo much to do, but I'm going to work through every single recommendation it brought up. And pray I make the deadline.
Excited to use this! I joined your session for Act Two and am just getting started with Substack as a side project, so this sounds perfect.
As for the day job… I work with a team of AI engineers who built an essay marking assistant for teachers using a very similar methodology! One potential difference in approach was we had a strong feedback loop between labelling and iterating on the prompt. Happy to share more info on what they did if helpful
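The labelling-and-iteration loop Laura describes can be sketched in a few lines. This is a hypothetical illustration of the general technique, not her team's actual code: human-labelled essays act as a fixed test set, and each prompt revision is kept or discarded based on how closely the model's grades agree with the labels.

```python
# Hypothetical sketch of a label-driven prompt-iteration loop:
# human-labelled essays form a test set, and each prompt revision is
# scored by its agreement with those labels.

def agreement(model_grades: list[int], human_labels: list[int]) -> float:
    """Fraction of essays where the model is within 1 point of the human label."""
    matches = sum(
        1 for m, h in zip(model_grades, human_labels) if abs(m - h) <= 1
    )
    return matches / len(human_labels)

def pick_best_prompt(prompt_versions, essays, human_labels, grade_with):
    """Try each prompt version; keep whichever agrees best with the labels.

    `grade_with(prompt, essay)` is a stand-in for an LLM call that
    returns an integer grade for one essay under one prompt.
    """
    best_prompt, best_score = None, -1.0
    for prompt in prompt_versions:
        grades = [grade_with(prompt, essay) for essay in essays]
        score = agreement(grades, human_labels)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score
```

The design choice worth noting is that the labels never change while the prompt iterates, so each revision is measured against the same yardstick.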
Thanks for the note, Laura! Keep me posted on how it works out for you. Curious to know how the needs might differ for a writer depending on their stage in the process. Also interested to learn more about your essay grading experience. Will send you a DM.
Congratulations Michael!
I can’t wait to upload a draft into the maw of your beautiful and terrible new machine.
You’re doing God’s work raising up a generation committed to achieving “discipline, focus, presence, patience, resilience, and determination”.
Here’s to standards!
Your dedication to this project is inspiring, Michael. I can't imagine you won't do well with this tool and help many people. I'm personally completely daunted by the idea of trying to respond and write in tandem with extensive analysis, but I just hate feedback in general because I'm far too over-sensitive and insecure about my writing. So of course that means I have to at least try it. Running my first draft through the system now. The fact that it takes 20 min for the tool to do the analysis already lends some credibility to the process. "It", or whatever the thing is that is doing the analysis, is actually doing some work to review what I wrote. I'll let you know how it all lands once the results are in! : ) By the way, the only reason I'm willing to interface with this at all is because you clarified that it doesn't do any writing for you. Thank god, and kudos for holding that line.
…very excited to fail this machine on all accounts…my goal is to get a perfect zero at least once…and then to try and ace the test and win enough money to force you to meet me in las vegas for a weekend of carrot top and rita rudner…
Congrats on launching this, Michael! I am stoked to try it on for size! You're an inspiration, the way you bring a project to reality 👏
I do not have the words. This is just brilliant work.
Awesome! Looking forward to loading one into this tool. Thank you Michael for all the work you did on this.
Is it trained just for English, or other languages too?
Same question. Can I submit my essay in Spanish? Will I get answers in that language?
Currently it returns the feedback in English. If you upload in another language, it seems to handle the translation back to English on its own.
Do you think it’s possible to activate the feature to return the feedback in Spanish?
(I don’t know how these AI tools work and if I’m asking a dumb question)
I’m very excited for this tool, very very very excited. For the first time I see potential to improve my writing.
I would use it to elevate my weekly newsletters
Thank you for your amazing work.
Hey Manuel, I will look into it.
Technically, it might be straightforward to implement, but I wouldn't be able to understand the feedback to know if it's accurate! (makes it hard for me to evaluate/refine). I suppose there is a chance that it's close to 1:1, but I'd want to test it with proper translations, and maybe even work with someone fluent in the language.
Basically, it's a potential can of worms (because then I also need to run this process for every language), and my general philosophy is to offer as little functionality as possible and make sure it's useful before I expand it too far (ie: I could also easily build a "chat with your essay" feature, but the work needed to build it right is significantly more).
For now, the recommended workflow is English > English, but it can seemingly do Spanish > English. Would it be possible to upload in Spanish, and then use the browser to translate English feedback > Spanish? That could be a temporary workaround.
Hey Michael,
Thank you for the clarification.
I see your high standards and your willingness to create great software.
As a test, I plan to submit the same essay twice:
1. Spanish -> English
2. ChatGPT-translated Spanish to English -> submit it to EA
And see if there are any differences in the feedback and how useful it is.
My guess is that either will be 10x better than no feedback.
After that, I'll share my conclusions with you. Hopefully they'll be useful.
Thanks for your work.
Thanks for running this experiment!
they all said it couldn't be done and then, of course, Michael Dean went and did it
I just bought a starter pack and have started running an essay of mine that got into a major literary mag, and I'm already excited!! I have to say no AI tool thus far has managed to capture my genuine excitement, and I think this is the first such encounter!!