|
Post by 101 on Feb 20, 2023 20:45:20 GMT
how do i create random envelope shapes using the ornament and crime synth module?
The Ornament and Crime (O&C) module is a powerful digital synthesizer that provides many features, including a wide range of envelopes that can be used to shape the sound of your patches. Here's how you can create random envelope shapes using the O&C module:
1. Select the "Quantermain" app on the O&C module.
2. Navigate to the "Quad Envelopes" mode.
3. Set each of the four envelopes to a different shape using the "Shape" knob.
4. Adjust the "Attack," "Decay," "Sustain," and "Release" parameters to fine-tune the envelope shapes.
5. Use the "Spread" parameter to introduce some randomness to the envelopes.
You can also use the "Pulse" input to trigger the envelopes in different ways, which can create more interesting and varied envelope shapes.
If you want to take things further, you can use the "CV" inputs to modulate the envelope shapes and create even more complex and dynamic sounds.
Overall, the O&C module provides a lot of flexibility for creating envelope shapes, and experimenting with different settings and modulations can lead to some really interesting and unique sounds.
|
|
|
Post by pt3r on Feb 21, 2023 7:54:51 GMT
Clickbait much? Edit: 101, just for my understanding: is this a transcript of the answer you got from ChatGPT on how to create random envelope shapes using the O&C module?
|
|
|
Post by 101 on Feb 21, 2023 13:39:27 GMT
I can see why Google is in such a flap now over ChatGPT. It basically destroys Google search.
|
|
|
Post by robertlanger on Feb 21, 2023 13:56:31 GMT
I almost don't dare to say it, but ChatGPT has written me obviously usable code for a digital oscillator... Not tried it yet IRL, but it doesn't look bad. Mindblowing, totally...
|
|
|
Post by pt3r on Feb 21, 2023 14:18:00 GMT
"I almost don't dare to say it, but ChatGPT has written me obviously usable code for a digital oscillator... Not tried it yet IRL, but it doesn't look bad. Mindblowing, totally..."

Just one simple question: if there were no such things as GitLab or other open-source code repositories, how much code would ChatGPT 'write'? Same thing goes for music and other forms of art; I feel like ChatGPT is nothing but a high-tech way of selling off knowledge without giving proper credit to the original source. #notaluddite
|
|
|
Post by 101 on Feb 21, 2023 19:05:10 GMT
"Clickbait much? Edit: 101, just for my understanding: is this a transcript of the answer you got from ChatGPT on how to create random envelope shapes using the O&C module?"

Yes. Exactly what it replied in response to the question.
|
|
|
Post by pt3r on Feb 21, 2023 19:19:13 GMT
Yep, it's confirmed: ChatGPT is Google on steroids.
|
|
|
Post by admin on Feb 21, 2023 22:31:25 GMT
"I almost don't dare to say it, but ChatGPT has written me obviously usable code for a digital oscillator... Not tried it yet IRL, but it doesn't look bad. Mindblowing, totally... Just one simple question: if there were no such things as GitLab or other open-source code repositories, how much code would ChatGPT 'write'? Same thing goes for music and other forms of art; I feel like ChatGPT is nothing but a high-tech way of selling off knowledge without giving proper credit to the original source. #notaluddite"

I have written a very long reply to this, but I chose not to post it. I think it's such a complex topic that discussions can become heated and even divisive very quickly. I hope that our community can stay focused on helping each other out in our journey to appreciating and finding joy in making electronic music.
|
|
|
Post by pt3r on Feb 22, 2023 8:21:11 GMT
Oh, but I don't feel like being divisive. I think it's just very interesting to see how A.I. like ChatGPT (I'm pretty sure Google's search engines also contain AI) will change the way we look at copyright and licenses, whether it's music, software, or scientific research. Will information obtained from ChatGPT ever be allowed as source material? I doubt it, but some people might use input from ChatGPT and thus (unknowingly) commit plagiarism.
|
|
|
Post by interstar on Feb 22, 2023 15:35:57 GMT
I've been using ChatGPT to write code for a while now.
Here are a few observations.
What it's really good at is translating code from one language to another. It's successfully translated a small CoffeeScript app into Python for me, doing about 90% of the work. I've had it translate Python into Java. Python into Clojure. Etc. And it doesn't make many mistakes at that level. Though you have to feed it smallish chunks at a time.
When asked to do small, well defined coding tasks it's OK. I've had it write me a Scrapy script to scrape a website and pull the images down from it. I've had it write autocomplete for a field on a web-form in ClojureScript. And various other small tasks. It's OK at this.
It's pretty good as a kind of front-end to boring-to-read documentation. You can ask it a direct question about, say, the right method to use for something in an Android app, and it gives good pointers. (Given I find the Android documentation horrendously hard to navigate, this is a big win)
It's pretty good as a kind of "rubber duck" to talk to about your problems. If you describe a programming problem and say "why might that be" it can give insightful ideas. It won't solve the problem for you but can help you think through the problem.
Where it's not so great is the bigger picture. For example, it's not good at knowing if a solution it's giving you is deprecated. Or even in joining up knowledge it has in one context and another.
I had one case of asking it to help me write a Processing sketch to make music visualizers by reading in an MP3 file and then recording it with an animation into a video file.
It said something along the lines of "sure, here's code for that, using the MovieMaker class". I tried using the code GPT wrote; there's no MovieMaker class. I ask where it is, and we go through a couple of rounds of it telling me that the MovieMaker class is available in the X library. Or maybe the Y library. I establish that there is no MovieMaker class. I ask ChatGPT if MovieMaker really exists, and it tells me that it was taken out of Processing in Processing 3.0. I ask why; it knows that the MovieMaker class was removed because it depended on an open-source QuickTime library that hadn't been maintained for years.
So it DOES know there's no MovieMaker class. And when and why it was removed. But that doesn't stop it trying to use that class in code. And making random guesses about where it can be found, until I force it to admit it's not there.
So I ask for an alternative. It suggests using the OpenCV library which IS still available in the Processing library.
Somehow it veers into Python and starts writing Python OpenCV code. I remind it we're in (Java) Processing. It writes me Java Processing code to record the animation to a video with OpenCV. However, I notice it's not writing the music into the video, just playing the music while writing the file. So I ask GPT to actually put the audio into the video file too. It says "sure ... here's how to put the audio into the video file". It starts writing the code, then freezes. I ask it to try again and it freezes again. Eventually I ask if it's possible to use this library to incorporate audio into the video file it's making. Ah, no. "Sorry about that," it admits. The OpenCV library for making videos doesn't handle audio (obviously; it's a computer vision library). Again, ChatGPT "knows" this when asked, but can't use that knowledge when generating the code. I ask it for another solution. And it goes back to using the MovieMaker class.

Another way it led me up the garden path a couple of days ago: I wanted to write some code to check for broken links on my site. I had this great idea that the code for checking broken links could itself run in the browser, alerting me if any of the links on a page were broken. After a bit of back and forth, ChatGPT helped me write the code. I run the code. And get a CORS error. For those who know anything about this, it's OBVIOUS that you can't really check for broken links in the browser, because CORS is going to prevent an application that came from one server from hitting any other websites. That's part of the security protection of the web. But ChatGPT happily goes along with me for an hour, helping me write this useless code that can't possibly do what it's meant to. Again, because it has no wider perspective.

So, I think ChatGPT is phenomenal. I've just signed up to the paid tier. I think it's very useful. But you have to be VERY careful. You can't use it to write things you don't already understand and can't eyeball the correctness of.
It WILL go wrong at the drop of a hat, and unless you are able to spot that immediately, it can waste a whole lot of your time. WRT the point that ChatGPT wouldn't have been able to write code without GitHub and StackOverflow: sure. But we wouldn't be able to write code without our university courses and O'Reilly books, or wherever we learned from. Computer programming knowledge doesn't just bubble up in us spontaneously. We all need to see examples to copy and learn from.
|
|
|
Post by 101 on Feb 22, 2023 17:19:00 GMT
I did try to wind it up a bit at first: what's the square root of -1, what's 1 divided by zero, what does 58008 upside down look like on an LCD display. But it was having none of it. Actually, it does do the last one.
|
|
|
Post by interstar on Feb 26, 2023 12:41:35 GMT
Over the last couple of days, ChatGPT has helped me, someone with only passing acquaintance of Docker, to "dockerize" a medium sized LAMP stack application (Apache, MySQL, PHP). This isn't my stack so I'm not an expert in any of these; and it wasn't my application, though I've been helping someone to migrate it to another machine over the last week.
Nevertheless, through a dialogue with ChatGPT, I got it containerized. GPT wrote about 80% of the code to go in my docker-compose.yml and my Dockerfile. It knew how to set me up with three containers (db, phpmyadmin and web application) and have them communicate with each other correctly, etc.
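For anyone curious what that three-container layout typically looks like, here is a minimal sketch of a docker-compose.yml for a LAMP-style stack. This is purely illustrative: the service names, image tags, ports, and credentials are my own assumptions, not the actual files from this project.

```yaml
# Hypothetical docker-compose.yml: db + phpmyadmin + web application.
# All names, versions, and credentials are placeholders.
version: "3.8"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
      MYSQL_DATABASE: app
    volumes:
      - db_data:/var/lib/mysql      # persist the database across rebuilds
  phpmyadmin:
    image: phpmyadmin:latest
    environment:
      PMA_HOST: db                   # containers reach each other by service name
    ports:
      - "8080:80"
    depends_on:
      - db
  web:
    build: .                         # Dockerfile with Apache + PHP lives here
    ports:
      - "80:80"
    depends_on:
      - db
volumes:
  db_data:
```

The key detail ChatGPT apparently got right is that Compose puts the services on a shared network, so the PHP code connects to the database by the hostname `db` rather than an IP address.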
Although there were some issues it failed to help with (and it let me spend some time on one without resolving it), I didn't catch it giving me much genuinely bad code. Only once or twice did it give me explicitly wrong information, such as the wrong name of a library.
Of course, this is a fairly boilerplate activity anyway so AI shouldn't get too much wrong. Nevertheless, I think it was still impressive.
Could I have solved this problem in the same time with just internet documentation and StackOverflow? I'm pretty sure I could have, yes.
BUT ... and this is something I think is important ... it would have required MUCH more diligence and concentration from me. With ChatGPT I could ask questions directly and get good-enough answers WITHOUT having to crawl through several SO answers, and without the high risk that I would get bored or side-tracked. I probably wouldn't have done this in 2 days by myself, not because I'm incapable of finding and grokking the documentation, but because I wouldn't be focused enough to do it. As it is, when rebuilding the container takes 2-3 minutes every time you make a mistake, it's easy to get side-tracked and stop. In this case the dialogue format and the quick responses from ChatGPT kept me engaged enough that I stayed working.
This is possibly the greatest value of it. Not that it does things we can't. But that it can turn a solitary activity requiring a lot of self-discipline, into a (pseudo) social activity where you get the psychological boost of working through having a conversation.
|
|
|
Post by pt3r on Feb 26, 2023 19:38:57 GMT
|
|
|
Post by robertlanger on Feb 27, 2023 8:39:09 GMT
"I almost don't dare to say it, but ChatGPT has written me obviously usable code for a digital oscillator... Not tried it yet IRL, but it doesn't look bad. Mindblowing, totally... Just one simple question: if there were no such things as GitLab or other open-source code repositories, how much code would ChatGPT 'write'? Same thing goes for music and other forms of art; I feel like ChatGPT is nothing but a high-tech way of selling off knowledge without giving proper credit to the original source. #notaluddite"

I'm wondering about this too, but the results I got didn't look like a simple copy-paste from some GitHub repository. I was able to refine my request in different directions, e.g. changing the operation of certain pointers that I had described ambiguously in a previous request, changing the signatures of functions, changing the coding style to camelCased variable names... But of course the IP side has to be taken into account, also and especially with all the picture generators. As Carsten stated, it's an enormous field of questions and challenges that is covered thoroughly in other places on the web. Anyway: amazing, exciting, challenging, frightening, everything, this AI stuff!
|
|
|
Post by interstar on Mar 1, 2023 12:19:32 GMT
Yeah. It's definitely more than just copying and pasting.
It's "synthesizing" the concepts it knows at some deeper level.
Yes, it's constrained within a convex hull of all the things it's seen before. But it's in a very high dimensional space. So there's a lot of room for novel interpolations within that hull.
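The "novel interpolation within the hull" idea can be made concrete with a toy sketch. This is entirely my own illustration, not anything from the thread: a random convex combination of known points is a genuinely new point, yet every coordinate stays inside the region the known points span.

```python
import random

def convex_combination(points, rng):
    """Return a random convex combination of the given points.

    The weights are positive and sum to 1, so the result always lies
    inside the convex hull of `points`: novel, but bounded by what
    was already seen.
    """
    weights = [rng.random() for _ in points]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(points[0])
    return tuple(
        sum(w * p[d] for w, p in zip(weights, points)) for d in range(dim)
    )

rng = random.Random(1)
seen = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # three "training examples"
new = convex_combination(seen, rng)
# `new` is none of the original points, but it lies inside their triangle.
```

In two dimensions the hull is small, but the point interstar is making is that in a space with thousands of dimensions there is an enormous amount of room between the training examples.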
|
|
|
Post by pt3r on Mar 17, 2023 14:25:16 GMT
|
|
|
Post by pt3r on Apr 10, 2023 9:51:37 GMT
|
|
|
Post by Kyaa on Apr 10, 2023 17:16:42 GMT
"Yeah. It's definitely more than just copying and pasting. It's 'synthesizing' the concepts it knows at some deeper level."

"Just one simple question: if there were no such things as GitLab or other open-source code repositories, how much code would ChatGPT 'write'? Same thing goes for music and other forms of art; I feel like ChatGPT is nothing but a high-tech way of selling off knowledge without giving proper credit to the original source. #notaluddite"

It wouldn't! And there is definitely dubious legality to it. One can obviously make the case that if a human learns to code they aren't plagiarizing, and that if a large language model learns, it (and the user) isn't either. I feel there is a general tendency to anthropomorphize LLMs, though, and it glosses over the fact that LLMs don't 'learn' the same way we do. All LLMs do is pattern match: they generate one word after another based on patterns observed in a massive amount of data gathered from a variety of sources (the question itself, context from the conversation, searches run on the web, etc.). The illusion of intelligence comes from the sheer amount of data these programs have access to. At the end of the day, though, these words literally come from people's work. That would be fine if the entire training set were open-source data, but that's unfortunately highly unlikely.
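The "one word after another based on observed patterns" description can be illustrated with a deliberately crude sketch of my own (a bigram table, nothing like a real transformer): the generator only ever emits a word it has already seen following the previous word.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, n_words, seed=0):
    """Emit words one at a time by sampling observed continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        options = follows.get(out[-1])
        if not options:
            break  # no continuation was ever observed for this word
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
sentence = generate(model, "the", 6)
# Every adjacent word pair in `sentence` occurred somewhere in the corpus.
```

A real LLM uses a learned, high-dimensional representation rather than a literal lookup table, so the analogy is loose, but it captures Kyaa's point that nothing can come out that wasn't, in some statistical sense, put in.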
|
|
|
Post by interstar on Apr 15, 2023 9:20:53 GMT
"One can obviously make the case that if a human learns to code they aren't plagiarizing, and if a large language model learns it's not (and the user isn't) either. I feel like there is a general tendency to anthropomorphize LLMs though, and it glosses over the fact that LLMs don't 'learn' the same way we do."
What's so fascinating about neural networks, and particularly this generation of them, is that they DO learn in a way that's uncannily similar to the way humans do. Of course, not identically, but it's far more like human learning than the assertion that "they are just copying" gives credit for.
So one example of this is that while, yes, the underlying network is pretrained on huge amounts of data, you can then "fine tune" a transformer for a specific task on only a few specific examples of data related to that task. This is very like the way humans might spend their first seven to ten years gaining competence in their native language, but only need a few school lessons to gain competency in writing a particular poetic style, or a physics essay.
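The pretrain-then-fine-tune pattern can be sketched with a toy model of my own devising (a single-parameter regression, standing in for a billion-parameter network): fit on lots of data from one task, then adapt to a related task with only a handful of examples by starting from the pretrained weight instead of from scratch.

```python
import random

def fit(w, data, lr=0.01, epochs=50):
    """Plain per-sample gradient descent on squared error for y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

random.seed(0)
# "Pretraining": a thousand examples of the task y = 2x.
pretrain = [(x, 2 * x) for x in [random.uniform(-1, 1) for _ in range(1000)]]
# "Fine-tuning": only three examples of the related task y = 3x.
finetune = [(0.5, 1.5), (-0.3, -0.9), (0.8, 2.4)]

w_pretrained = fit(0.0, pretrain)                      # ends up near 2
w_finetuned = fit(w_pretrained, finetune, epochs=200)  # nudged toward 3
```

The fine-tuning stage succeeds with tiny data precisely because it starts from a weight that is already close, which is the loose analogy to a pretrained transformer needing only a few examples for a new task.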
That's the kind of behaviour that shows that the "it's just a collage of things it's already seen" view really doesn't capture what's going on.
|
|
|
Post by Kyaa on Apr 16, 2023 19:43:19 GMT
"One can obviously make the case that if a human learns to code they aren't plagiarizing, and if a large language model learns it's not (and the user isn't) either. I feel like there is a general tendency to anthropomorphize LLMs though, and it glosses over the fact that LLMs don't 'learn' the same way we do."

"What's so fascinating about neural networks, and particularly this generation of them, is that they DO learn in a way that's uncannily similar to the way humans do. Of course, not identically, but it's far more like human learning than the assertion that 'they are just copying' gives credit for. So one example of this is that while, yes, the underlying network is pretrained on huge amounts of data, you can then 'fine tune' a transformer for a specific task on only a few specific examples of data related to that task. This is very like the way humans might spend their first seven to ten years gaining competence in their native language, but only need a few school lessons to gain competency in writing a particular poetic style, or a physics essay. That's the kind of behaviour that shows that the 'it's just a collage of things it's already seen' view really doesn't capture what's going on."

Well, I don't agree with the assertion that LLMs learn in a human-like manner, as they can't exhibit intent. Fine-tunings of the LLM could give the illusion of intent, but those fine-tunings are done by a human. Setting that aside and getting back to the original point, though, there is still definitely a legal issue with what LLMs are currently doing. There is pretty clear evidence that AIs don't respect the licenses of the data they are trained on. A good example is DALL-E outputting the Shutterstock logo in generated artworks.
|
|
rappel
Junior Member
Posts: 51
|
Post by rappel on Apr 29, 2023 14:12:37 GMT
"I almost don't dare to say it, but ChatGPT has written me obviously usable code for a digital oscillator... Not tried it yet IRL, but it doesn't look bad. Mindblowing, totally..."

I just learned ChatGPT can translate prompts to useful code in Arduino, and MicroPython for the Raspberry Pi Pico... I haven't yet started, but this is gonna be huge... I agree, totally mindblowing... Here is an example:

Prompt: Code a simple saw wave synthesizer in MicroPython

ChatGPT: Sure, here's an example code for a simple saw wave synthesizer in MicroPython:

```python
import math
import pyb

# Define the pin for the speaker output
SPEAKER_PIN = pyb.Pin("X1", pyb.Pin.OUT_PP)

# Define the frequency and duration of the saw wave
FREQUENCY = 440  # Hz
DURATION = 1000  # ms

# Calculate the number of samples to generate
SAMPLE_RATE = 8000  # Hz
NUM_SAMPLES = int(SAMPLE_RATE * DURATION / 1000)

# Generate the saw wave samples
samples = []
for i in range(NUM_SAMPLES):
    sample = int(255 * (i % (SAMPLE_RATE / FREQUENCY)) / (SAMPLE_RATE / FREQUENCY))
    samples.append(sample)

# Play the saw wave samples through the speaker
for sample in samples:
    SPEAKER_PIN.high()
    pyb.udelay(sample)
    SPEAKER_PIN.low()
    pyb.udelay(int((1 - sample / 255) * (1 / FREQUENCY - sample / SAMPLE_RATE) * 1000000))
```
|
|
|
Post by pt3r on May 19, 2023 9:35:51 GMT
|
|
|
Post by Kyaa on Jun 16, 2023 16:37:02 GMT
Kyle Hill recently did a video in which he breaks down how ChatGPT works in a 'relatively' digestible format.
|
|
|
Post by pt3r on Jul 6, 2023 9:34:15 GMT
|
|
|
Post by pt3r on Jul 12, 2023 19:40:04 GMT
|
|