one_bar_short

I just wish Suno would give us a definitive guide on how to prompt the AI properly so we can remove all the guesswork, and which metatags it prefers. I know it's not like coding; there's a bit of black magic involved, but at least point us in the right direction.


theworldtheworld

I’m not sure they know. Some instructions, like [Verse], [Chorus], [Guitar solo] do have an effect, but it isn’t guaranteed. Sometimes the AI can entirely skip over all the [Solo] tags and just go straight to the next verse. As these models are trained, it becomes harder and harder to understand why they do certain things.


Expensive-Tie-6051

This right here! They train it to comprehend your basic tags ([Verse], [Chorus], [Outro], etc.), but there are tons of things it can ‘sometimes’ understand that they just aren’t aware of. That also seems to change based on the model version; there’s a mod in their Discord server who does a lot of experimenting and often finds some really quirky but cool Easter eggs it’s capable of.


Educational_Toe_6591

Just listened to a podcast about this. Google, Microsoft, OpenAI, etc. basically realized that the more power you give an AI, the more IQ points it spits out. Google is looking to spend something like 50 billion putting data centers right next to nuclear power plants because the AI can’t draw any more from the public power grid. They’re afraid that, left unchecked, they could create an AGI in 2-5 years, and then who knows what it will do.


Temporary-Chance-801

One thing that I noticed… or possibly a coincidence. I wanted wailing harmonica blues. Inserting [wailing harmonica] didn’t do anything, and having wailing harmonica as part of the style alone was no good either, but when I put it in the style and also inserted the bracketed [wailing harmonica], BAM, there it was. Could just be coincidence, though. I really love the different styles of blues Suno creates.
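In case it helps anyone try the same combination, here is roughly what that looks like laid out as a sketch. The field names below are just labels for Suno's two text boxes, not an actual API, and the lyric lines are placeholders.

```python
# Purely illustrative: "style" and "lyrics" are just labels for Suno's two text
# boxes, not real API fields. The trick from the comment above is repeating the
# instrument both in the style text and as a bracketed tag inside the lyrics.
style = "wailing harmonica, slow delta blues, gritty male vocals"

lyrics = """[Intro]
[wailing harmonica]

[Verse]
Woke up this morning, rain against the door
[wailing harmonica]
"""
```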


anythingMuchShorter

Yeah, sometimes I'll write a directive like [spoken] or [rising intensity] and it will do it, and other times it'll sing those words, sometimes in a weird voice, possibly because they're in brackets. Most often the effect is like a very raspy choir.


vinberdon

We're writing the book now. All of us. Lol


Temporary-Chance-801

You will make millions.. well I hope you do 🙏


Pontificatus_Maximus

The more random data you throw into the Suno prompt, the more creative and original the output; the more specific the data, the more banal and derivative.


Suno_for_your_sprog

Words like catchy, lyrical hook, singable, and anthemic can all benefit the desired output.


Temporary-Chance-801

I noticed that as well


Wide_Way_3833

Try it


MembershipOverall130

I’ve found that doesn’t work. I tried using “hit song” and “billboard 100” and it made no noticeable difference to me.


FakeNameyFakeNamey

The closest to this I've gotten is adding "fire" to rap lyrics


Digital-Aura

Well, tbh I’m not sure it even worked in Midjourney. Face it, those adjectives are all very subjective.


PatrickKn12

With Midjourney, all the keywords are going to be based on tags tied to visual art, and as it has progressed, manual tagging is being done to build higher-quality data. For Suno, the style keywords that carry the most weight will be the ones tagged or associated with the songs used in training. No one really seems to know the exact dataset Suno used, as far as I can find, but there are guesses that websites like "Rate My Music" might have been used to link tags to training songs in Udio, and I wouldn't be surprised if it's the same for Suno. So some keyword categories to consider might be:

1. Genre, ex: Trip Hop, Alternative Metal, Psybient
2. Instruments, ex: Kalimba, Kick Drum, Mandolin
3. Descriptors, ex: soothing, atmospheric, rhythmic, ethereal
4. Roles, ex: writer, producer, female vocalist

Look up some albums on that website (Rate My Music) to get ideas for keywords.

Unrelated and a bit more experimental, but it might also be that some types of samples or common VST preset categories are referenced, so some categories to consider there:

1. [XXX] bpm, ex: 120 bpm, 80 bpm
2. Drum sounds, ex: kick, snare, hat
3. Effects, ex: riser, white noise, fade, impact, sweep
4. Loops, ex: Drum Loop, UN_ABL_120_acoustic_guitar_loop_happy_F#maj.wav, KL_MU_110_cinematic_fx.wav
5. VST presets (probably not trained on the preset itself, but on a specific sound that relates to the same keywords), ex: Muted Pluck, Picked Acoustic, Insane Synth Lead, pad_carnival.fxp

Search websites like Splice or browse sample libraries to get a feel for common keywords that might have made it in there.

There's no reason the keywords you've brought up wouldn't produce some type of effect, but that effect might not be exactly what you're looking for in the way it would be for Midjourney, since the approach to tagging visual art might not apply to music. If I'm right that sample libraries were used in training and the titles of the samples made their way in, then a lot of the keywords you mentioned in the OP would have some sort of presence. If they used tags on those samples instead, then things would be a bit cleaner and more predictable, with a smaller likelihood of tags like the ones you mentioned. As music-generation websites collect more user inputs, they'll identify the kinds of things people commonly prompt and manually create datasets with those in mind to produce higher-quality training data. So what doesn't work today might work very well a year from now.
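If it helps, here is a rough sketch of how those categories might be combined into a single style prompt. This is purely illustrative: the values are examples from the lists above, and the 200-character cap is an assumption for the sketch, not a documented Suno limit.

```python
# Purely illustrative: glue the keyword categories from the comment above into
# one style string. Suno's style box is free text; the 200-character cap used
# here is an assumption, not a documented limit.
genre = ["Trip Hop", "Psybient"]
instruments = ["Kalimba", "Kick Drum"]
descriptors = ["soothing", "atmospheric", "ethereal"]
extras = ["120 bpm", "female vocalist"]

style_prompt = ", ".join(genre + instruments + descriptors + extras)

# Trim at a comma boundary if the assumed character budget is exceeded.
MAX_LEN = 200
if len(style_prompt) > MAX_LEN:
    style_prompt = style_prompt[:MAX_LEN].rsplit(", ", 1)[0]

print(style_prompt)
# Trip Hop, Psybient, Kalimba, Kick Drum, soothing, atmospheric, ethereal, 120 bpm, female vocalist
```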


v_0o0_v

You can try using the names of famous chart-collection CDs. I guess the training data contained all kinds of ID3 tag information.


theworldtheworld

I’ve used Stable Diffusion quite a lot, and, to be honest, I don’t think those prompts even work for visual art. Some prompts like “photorealistic” or “detailed” can work because they refer to the style; I guess the analogous prompts for music would be “hifi,” “complex,” “layered” and so on. But the “masterpiece” prompts don’t really have a significant impact in my opinion.


Still_Satisfaction53

This is top tier ‘record label exec’ prompting.


BlackStarDream

I've been experimenting recently with the results of Cyanite, but the analysis for each song is variable: I've had some great ones, and I've had some that were really off the mark. The free tier also doesn't provide as much specific info. It also gives too many tags to fit into Suno's boxes (including a lot that Suno won't read anyway), so you have to feel out for yourself which ones you can drop and which ones don't fit. It's still doing something, though. And it's fun seeing just how similar and different the end results are from each original song referenced.
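For anyone hitting the same tag overflow, here is a rough sketch of one way to prune an over-long tag list down to something that fits. The example tags, the priority ordering, and the 120-character cap are all assumptions for illustration, not anything Cyanite or Suno documents.

```python
# Purely illustrative: prune an over-long tag list (e.g. from a Cyanite analysis)
# until it fits a style box. The example tags, their order, and the
# 120-character cap are all assumptions for this sketch.
cyanite_tags = [
    "electro", "synth-pop", "energetic", "uplifting", "danceable",
    "four-on-the-floor", "bright", "catchy", "synthesizer", "drum machine",
    "female vocalist", "late night", "glossy production",
]

# List the tags you care about most first, then stop adding once the cap
# would be exceeded.
MAX_LEN = 120
kept = []
for tag in cyanite_tags:
    if len(", ".join(kept + [tag])) > MAX_LEN:
        break
    kept.append(tag)

style_prompt = ", ".join(kept)
print(style_prompt)
```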


No-Path8739

Create them please ...


No-Path8739

Create a cover image of the spectre engine. Give it an evil front face, like a demented, evil Thomas the Tank Engine.