How I recorded an album on my own, in my room
Written by Admin on March 26, 2020
A few months back, when I started, I mostly just knew how to play the guitar. Since then I've learned and done a lot more than that. Almost everything happened in my room. (The next picture shows everything I used.)
For the sake of overview, I'll first lay out a list of the things I learned through experimenting, researching and teaching myself, on the way to the final goal of publishing. After that I'll talk about my experience and journey, and try to explain each topic in more depth: important things I retained, major breakthroughs, external resources I used, and whatever other details.
Disclaimer: I am, after all, still an amateur.
The album, naturally, has some flaws. I won't be able to write down literally everything I learned and everything that happened. The following text contains my views and reflections on the topic, and it should be read as that.
If you disagree with any information here let me know why 🙂
1 — Recording
Learned what hardware and software setup I needed to record myself to make my own record. Also, room acoustics and other.
2 — Music theory
Learned how to conjugate and play instruments together to create music. Some basics.
3 — Drums
Learned basic drum playing, then digital drum machines.
3.5 — Extra recording
Very early recording (using some of the early knowledge mentioned above). It's amazing how much better I got.
4 — Singing
Learned how to get better at singing — and did it. (It was a difficult wall to overcome).
5 — Software Instruments
Learned how to play instruments with the computer keyboard and with a MIDI keyboard, and to add texture to my music that I couldn't have achieved otherwise. (Music theory came in very handy for this.)
6 — Mixing
Learned to mix, at least decently. (I didn’t expect it to be this hard).
7 — Mastering
Learned it’s not the same as mixing, and how to do it well enough (through someone else’s workflow).
8 — Finishing
Learned to say “this is complete, finished”. (This is a subjective/opinion chapter — I had trouble getting over this step).
9 — Album art
Learned basic photo editing to create a picture I had a crazy idea for.
10 — Music Video
Learned only how to use certain tools in iMovie. I mostly just experimented my way to an outcome.
11 — Publishing
Learned how to publish songs to streaming platforms.
12 — Promotion
Learned and developed some strategies as an attempt to get listened to. (Still struggling)
13 — Overview
tl;dr: There’s really not one. There’s this introduction, the in-depth chapters, and a conclusion/overview. If you want to listen to the album — it’s called Ununited by Romes — Spotify | Apple Music | Youtube | … — maybe you want to listen while reading the article
a) Audio Interfaces
Have you ever wondered how to record your guitar to the computer?
The very first thing I acquired to record myself was an Audio Interface.
Most audio interfaces (like mine) have a microphone (pre-amp) channel and, often, a second line channel, for instruments like guitars or keyboard.
If you are using a mic, you need a pre-amp. Microphones produce weak signals (mic level) which must be boosted up to line level; this is what a pre-amp does. It may be integrated into the microphone, the mixer, or the audio interface, or be a standalone unit, but there is one somewhere with any microphone you use.
When it's plugged into my computer I can change the sound input to this audio interface. This way my DAW (Digital Audio Workstation, the software I use to record — more on this later) detects input sound from whatever is entering the audio interface.
b) Microphone Types
You can get very specific with microphones. I used a regular dynamic mic my grandfather found in his garage.
Here’s a short and concise explanation on main microphone types: The Different Types Of Microphones Explained
I used a standard cable, bought at a regular music store, to connect my guitar and keyboard to the second (line) input on the audio interface.
A digital audio workstation (DAW) is an electronic device or application software used for recording, editing and producing audio files.
I started out by using GarageBand — a free option for Mac. There might be other free DAWs for Linux and Windows, but since I haven't used any, I'll leave that research for whoever needs it. GarageBand allowed me to record tracks and edit the sound, for example through digital amps.
It was great, but when I started learning more about production I migrated to Logic Pro X. GarageBand projects open in Logic.
Logic Pro X allowed me to do more. Logic Pro X, and other professional DAWs, give more access directly to plugins and to manipulating them. With Logic Pro X I started my mixing journey. A story told further on. I stuck with Logic Pro X until the end — it worked for me.
a) Microphone Stand
Right from the start I bought a microphone stand so I could sing while playing guitar. This proved very useful when recording: I could play while singing, it was the best way to record my acoustic guitar, I could also just sing (for some songs) without holding the mic, and I could experiment with things like recording vocals from the other side of the room (that one wasn't actually included in any song).
b) MIDI Keyboard
Maybe halfway through the album I was given a small MIDI keyboard. I was getting by without it just fine — in most DAWs, your computer keyboard can be played to generate MIDI tracks. I'll get into more detail on this further on. This MIDI keyboard made it easier to play and improvise with digital instruments.
c) Microphone Pop Filter
Unfortunately I didn't get the chance to buy a microphone pop filter, but it would have been a really useful addition to my setup — it's a noise protection filter that helps reduce or eliminate plosives.
Plosive thumps on vocal recordings are caused by strong blasts of air that result from certain consonant sounds hitting the microphone and creating large pressure changes. It’s far better to prevent them rather than attempting to fix them in the studio.
I had to use alternative methods to reduce plosives, like changing the microphone angle and singing from further away.
IV) Room acoustics
I read a bit about room acoustics. I couldn't afford to acoustically treat my room, so I didn't try at all. Treatment helps reduce reflections, but it's not desirable to make your room completely "dead" either. Read more about acoustic treatment on your own if you think you need it; I decided not to.
It's good to place the mic (and sing) around the center of the room. Being near the walls means capturing more reflections; in "the center" you are more likely to get a more defined sound.
I think my first major breakthrough was understanding sound level. This means — at what volume/level should you record?
The first tracks I recorded were constantly clipping. I knew nothing about this.
Clipping occurs when your recording levels are too loud. The interface and/or the DAW cut off everything above the maximum level, distorting the track. This leads to a big quality loss and unwanted distortion on what you recorded. You also end up with no headroom (loudness space left to work on the track).
When the recording level goes above 0 dB you are almost certainly clipping. Once I had a better-defined workflow, I was recording at levels that peaked around −6 to −8 dB (meaning the loudest parts of the recording stayed 6 to 8 dB below the clipping point). This left enough headroom to then mix the track.
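To make those numbers concrete, here's a minimal Python sketch (my own illustration, not part of any DAW — `peak_dbfs` is a hypothetical helper) that computes the peak level of a signal in dBFS, the scale where 0 dB is the clipping point and more negative numbers mean more headroom:

```python
import math

def peak_dbfs(samples):
    """Return the peak level of a signal (floats in -1..1) in dBFS.

    0 dBFS is full scale (the clipping point); a peak of -6 dBFS
    means the loudest sample sits 6 dB below clipping.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # pure silence has no measurable level
    return 20 * math.log10(peak)

# A signal peaking at half of full scale sits around -6 dBFS,
# leaving comfortable headroom for mixing.
print(round(peak_dbfs([0.0, 0.5, -0.5, 0.25]), 1))  # -6.0
```

So "peaking at −6 to −8 dB" just means the loudest samples stay at roughly half of full scale or below.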
From here I started recording some guitar parts to the DAW, and experimenting.
If you don’t know music theory at all, you can still make music. I used to do this — played notes arbitrarily on an instrument on top of another, and changed the notes around until they sounded right with the music — then: memorize, and play to record.
Discovering music theory allowed me to delve into fuller compositions, and to compose with far less effort.
Music theory came to me in the form of randomly seen youtube videos, peers mentioning it, and some threads on HN.
I’ll just talk about a major breakthrough for me — keys — the best way I can explain it (not very technical).
For further studies, reference a great (CC) book (that I found on HN front page): Music Theory for Musicians and Normal People
I) So… Keys!
A key signature designates notes that are to be played higher or lower than the corresponding natural notes and applies through to the end of the piece or up to the next key signature.
As this was easier for me to visualize on the piano, I’ll also mention how I saw it there.
Or, as I saw it at the time: “There is a set of 7 notes I can play” — “I can pick a sequence of notes — some white and some black keys — and if I only play these notes, whatever I do, it’ll sound “good”” (might be tasteless tho)
Expanding this to a broader context, I discovered (I think also on HN) a website called guitar dashboard
Whenever I started composing, I’d open up guitar dashboard, select a key (the letters in the middle column), and play only chords from the circle that changed with the key I chose.
(To play the chords in the key, look at the outer ring of the circle.
[EDIT 22/3, thank you Rohit Kumar]
> Green is for a major chord (big Roman numeral),
> Blue is for a minor chord (small Roman numeral)
> Red is for a diminished chord (small Roman numeral).)
After recording that, I’d look at the tab part of the webpage — and play on top of those chords all the notes I could play (which are the ones that appear onscreen).
For adding, for example, piano, I’d just note down which notes were half a step up or down — and play them, improvising over the chords until I found something I liked.
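That "set of 7 notes" isn't magic — a major key can be generated mechanically from the whole/half-step pattern of the major scale. A minimal Python sketch (my own illustration, using sharps-only note names; `major_key` is a hypothetical helper, not part of guitar dashboard):

```python
# The 12 chromatic notes, sharps-only spelling for simplicity.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Whole-whole-half-whole-whole-whole-half: the major scale pattern
# measured in semitone steps between consecutive scale notes.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2]

def major_key(root):
    """Return the seven notes of the major key starting on `root`."""
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        i += step
        scale.append(NOTES[i % 12])
    return scale

print(major_key("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(major_key("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#']
```

Sticking to the notes `major_key` returns for your chosen root is exactly the "only play these notes and it'll sound good" rule above.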
Tempo is a musical term for the pace, or speed of a piece.
It took me a while to learn to use a metronome, and in my case this was particularly important. I would record without one, and since I usually started by recording my guitar part on its own, when I later added the drums nothing would fit together. I was playing completely out of tempo.
The instruments must play together — not each one by itself. When musicians play at the same time, their tempos naturally align; but when you record one thing first, then a second, and then a third, everything can go wrong very fast.
I started figuring out the tempo of my songs. To do this I used a website like “tap for beats per minute”, sometimes a mobile app. I tapped along to the beats of my song and it would give me the bpm (beats per minute). I’d insert the bpm in my DAW and then start the metronome.
From there on forward I’d play to the tempo (For one track (Circles) I discovered it changed twice throughout the song (for the chorus) and had to figure out how to do that on the software) and then I would be able to add the drums and all other instruments without any major problems.
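Those tap-for-BPM tools are simple arithmetic: average the time between taps and divide 60 seconds by it. A minimal Python sketch (my own illustration, not the actual website's code; `tap_bpm` is a hypothetical helper):

```python
def tap_bpm(tap_times):
    """Estimate beats per minute from a list of tap timestamps (in seconds)."""
    # Time gaps between consecutive taps.
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    avg = sum(intervals) / len(intervals)
    # One beat every `avg` seconds means 60/avg beats per minute.
    return 60.0 / avg

# Tapping every half second corresponds to 120 BPM.
print(tap_bpm([0.0, 0.5, 1.0, 1.5]))  # 120.0
```

Averaging over many taps smooths out your own timing errors, which is why those sites ask you to keep tapping for a while.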
III) Doing whatever you want
You can make great songs sticking to just one key and to basic theory, in a lot of ways — one of the songs (Controlo Emocional), my most basic track, was made following simple music theory rules. I just changed key once during the song, for an "Outro" part. Changing keys is, well, literally that: changing from one key to another. This is most often done through a chord that the two keys have in common.
However, once I got the hang of playing instruments in the same key, I mostly stopped using visual tools like guitar dashboard (which, while good, kept me locked onto what I was seeing on my screen) and started playing more by feeling. I wasn't always aware of what key I was in (I don't have the musical training to always know this), but the key knowledge in the back of my brain helped me stay good-sounding within reason, while still allowing me to make mistakes and let those accidental mistakes steer the music. I'd change key accidentally, but roll with it. I learned to adapt to the key changes, and with that, to improvise notes over my recorded foundation. This was particularly helpful on the MIDI keyboard, because with it I could play any digital instrument — I only had to get used to playing keyboard notes along with the song, instead of learning every instrument I used.
What I mean by all this is that music theory had me playing within a set of rules, but I kept changing the situation those rules were applied in. I did whatever I wanted, and some of these basics helped me keep up with my changing and experimenting mind.
When I first started including drums into my tracks I was playing an electric drum kit, and plugging the output into my audio interface.
Wait: How did I learn to play the drums?
Three years ago, in High School, me and some friends formed a band. I was playing mostly chords on the guitar, however, during some breaks in the studio the drummer taught me some basics:
“Keep the rhythm by hitting the hi-hat… 1.. 2.. 3.. 4..
Hit the kick on 1, hit the snare on 3”
(I’m sure if you google about drum basics you’ll find more than enough videos to give you something to hold on to, so you can start playing!)
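That basic beat can be written out as a tiny step-sequencer grid. Here's a sketch in Python (my own illustration with made-up names — `BEAT` and `render` aren't from any real drum software) of one bar split into 8 eighth notes, with the kick on beat 1 and the snare on beat 3, just like the drummer said:

```python
# One bar of the basic beat as 8 eighth-note steps (1 = hit, 0 = rest).
BEAT = {
    "hihat": [1, 1, 1, 1, 1, 1, 1, 1],  # steady eighths keep the rhythm
    "kick":  [1, 0, 0, 0, 0, 0, 0, 0],  # on beat 1 (step 0)
    "snare": [0, 0, 0, 0, 1, 0, 0, 0],  # on beat 3 (step 4)
}

def render(beat):
    """Print the pattern as a small step-sequencer grid."""
    for name, steps in beat.items():
        print(f"{name:>5} " + "".join("x" if s else "." for s in steps))

render(BEAT)
```

This is essentially the same representation a DAW's drum-machine editor gives you, which is why the beat translates so directly to MIDI later on.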
The other times, on breaks, as soon as they left the room to smoke I'd drop my guitar, move to the drums, and practice randomly until they came back. I'd show her what I'd accomplished, she'd say "terrible" — and give me some small new tip. This did NOT happen a lot of times. Practice makes you more at ease very fast — and even at the start I found out it wasn't impossible to play drums. No "superhuman coordination skills" needed — you probably do need them to be a pro, but just to get playing, almost everyone can grasp it fast enough. After I left the band I didn't play drums for quite some time. (BTW, there's nothing quite like playing on a real drum kit. The sound, the power…)
Then, I think it was Christmas 2018, my cousin gave me his electronic drum kit — I started recording the guitar and playing along to it, whatever way I could.
Coming back to the first sentence, I stopped using the electronic drum kit in the final recordings. It had a few problems:
1) The sound quality wasn’t the best,
2) It was hard keeping up with the small tempo variations recorded in the other instruments.
Despite recording with a metronome, sometimes fluctuations happened and I didn’t notice them. I figured I needed to record the drums first — and the rest on top. However not being the best drummer it was hard for me to play without any other instrumental reference / guide.
My solution: I wanted to keep the drum-kit freedom, but make it usable. From a certain point forward, I'd play along to the song on the physical drum kit until I was set on a beat, and then open up the MIDI track editor and recreate, beat by beat (at first), what I had played live. Later on, with the MIDI keyboard, I would play all the drum hits I could with my fingers (each key is assigned to a drum sound) and add whatever I couldn't play at the same time afterwards.
So now, after having added the drums, I would mute all the tracks that had helped me "create" them, and re-record them all — this time to the tempo and sound of the digital drum kit.
This is an attempt I saved of what I mentioned above — playing drums on top of the recorded guitars — and you can kind of hear that it wasn't working out (and why this chapter comes before the singing one).
It’s also a great display of progress — this song made the album, can you tell which one it is?
(And yes, that’s what I named it after finishing: terriblebutatakenonetheless.mp3)
As everything else, this is subjective, but it’s harder to convince yourself that you sound good. So this chapter is about how I got better.
Just like the other chapters, this one involves practicing a lot. Up to a point, I got way better by just singing consciously along to songs or to my own melodies — you have to become aware of whether you're singing the same note as what's playing; just a little concentration is required.
However, big improvements started when I began getting more comfortable with my voice, and stopped performing as if I was someone else.
This comfort came with more deliberate practice, which I conducted through a book called Set Your Voice Free by Roger Love.
I've really enjoyed the book so far (I haven't finished it yet!), and what I've read has helped me a lot.
Major breakthroughs include:
a) Diaphragmatic Breathing
Breathing is the foundation for singing. Master your breath to get much farther.
Diaphragmatic breathing, sometimes called belly breathing, is a deep breathing technique that engages your diaphragm, a dome-shaped sheet of muscle at the bottom of your ribcage that is primarily responsible for respiratory function.
When you inhale, the diaphragm contracts and moves downward. This movement sets off a cascade of events. The lungs expand, creating negative pressure that drives air in through the nose and mouth, filling the lungs with air.
When you exhale, the diaphragm muscles relax and move upwards, which drives air out of the lungs through your breath.
(Read more about this breathing here: How To Breathe With Your Belly)
This type of breathing allows you to have way more control over your diaphragm, and therefore over the amount of air you use when singing.
So I practiced this breathing until it became pretty much natural.
With this newfound control, I learned that for a better singing sound I should release air in a fluid, continuous motion — not bursting out air for each word, but keeping a constant output of air while singing the verse.
b) Technical practice
I realised how much I benefited from practicing vocal exercises — technical exercises, as opposed to just singing songs.
There are plenty of these exercises online. The more specific ones may differ between male and female voices, since our vocal ranges are different.
(As I only had contact with the exercises from the book, I don't want to link any others, since I haven't tried them.)
I started practicing with these exercises.
The process is mildly dull however you do notice results soon after.
Practice, practice, practice…
c) Warm up before Recording
At first I'd jump right into singing and get mildly frustrated quite easily, since I'd have little confidence or control over what I had just started singing.
At some point in time I read somewhere about how important warming up was. I started doing it.
Doing some vocal exercises / warm-ups before singing helped unblock my voice (it gets rusty overnight), and after them, singing my song felt way more effortless and less frustrating.
I used some youtube videos before I started reading the book, then I would use the same book vocal exercises that I also used for practicing.
(Here are those videos (they worked well enough for me): 5 MINUTE VOCAL WARMUP (there are longer versions xd))
Software instruments — Custom sounding synths, recreating instruments I don’t have.
Nowadays there is a gigantic collection of sounds to use as a sampled instrument. Logic Pro comes with loads of those.
The big breakthrough here was the overall discovery of their existence and usage.
I included a few digital tracks besides the drums in most songs on the album.
The harpsichord in Reverse Glimpse
The flute, harp, nylon piano and organ in Too Late
In the beginning I would play them on the computer keyboard (press Cmd+K on a Logic software-instrument track for an emulated keyboard).
But later on, and ultimately, I used the MIDI keyboard mentioned in the Recording section.
For the track Reverse Glimpse I defined a sequence of keys to play and then looped them.
For Too Late, using up what I had learned in (2) Music Theory about keys, I improvised the flute and harp on top of the song — the first take made it to the final version.
PS: They’re very useful for making sounds at game jams, where I usually don’t have any instruments with me
Oh boy… even writing this is hard.
I went through so many guides on mixing. So many videos. It would be impossible to go over all the information I consumed. However I’ll add here some basics/breakthroughs I now stick to. I’m currently reading a book on mixing and I’m sure that in a few weeks I could write more about this. For that reason, I’m going to go over the fundamental things I believed at the time of the recording, and were what shaped the released songs. This could be enough for you to get started on mixing your own songs, but not much more — again, I am not a professional!
Extra: It’s very helpful to define your own workflow and use it as a base for mixing your tracks.
a) Make sure all your tracks aren’t clipping
Reference (1) Recording for what I mean.
b) Use your faders
Before anything else: just setting each instrument to its best-fitting volume in the whole mix will greatly improve how it sounds. Later on, when applying effects, you'll probably need to correct the volumes and balance them again. But setting them well once at the beginning will also improve your ability to analyse the sounds.
A strategy I used was to pull all the faders (volumes) down and bring them up slowly, one by one, adjusting as I went so they all sit well together.
Note: I won't be able to explain what all of these "effects/plugins" are — the article would just go on forever. Google what EQ, compressors, etc. are on your own. I'll assume you'll do the needed research if you want to understand this bit.
Each instrument produces a big range of frequencies, but you want some to stand out more than others. For example, a bass sound is defined by low frequencies.
All the tracks combined will be "fighting" over the same frequency space.
c.1) Using EQ to get rid of undesired frequencies and a clearer sound.
I) High-pass filters
On the simplest level, a high-pass filter (sometimes called a low-cut) is just a filter that attenuates low frequencies below a certain cutoff frequency and lets frequencies above it pass.
I used a high-pass filter on instruments like guitars, vocals and other tracks that built up unimportant low frequencies that could be cut out. This does not mean you should cut all the low frequencies from every instrument except the bass and the kick; however, if your low end sounds muddy, this can be a way to get a clearer bass and kick sound in the overall mix.
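For intuition about what "attenuates below the cutoff" means, a high-pass filter can be as simple as a one-line difference equation. This Python sketch (a textbook first-order RC filter, my own illustration — nothing to do with Logic's actual EQ plugin) shows how a constant, lowest-possible-frequency signal gets filtered away while fast changes pass through:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """First-order high-pass filter: attenuates content below cutoff_hz.

    Discrete version of an RC high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1]),
    where `a` is derived from the cutoff frequency and the sample period.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(a * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A constant (0 Hz) input — the ultimate low frequency — decays to
# almost nothing, while any sudden jump would pass straight through.
filtered = high_pass([1.0] * 1000, cutoff_hz=100)
print(abs(filtered[-1]) < 0.05)  # True
```

Real EQ high-pass filters are steeper and tunable, but the core behaviour is this: steady low-frequency content is removed, transients survive.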
Additionally, you can identify problems with your sound and use EQ to fix them.
This might mean:
Frequencies that are giving unpleasant sounds, like clicks or high pitched noise — you can remove these with EQ by almost completely reducing those specific frequencies to nothing.
A muddy sound in a certain frequency range — for example, if two guitars are "fighting" for the same frequency range, you might apply a little EQ reduction on those frequencies on one of the guitars, so the other one has a clearer sound.
c.2) Using EQ to enhance your track
This step comes down more to practice and to knowing the music and what you want. You're listening to your guitar and you think the high end could be a bit louder, so you boost it with EQ.
What I mostly did, since I have no deep knowledge of frequencies, was start out from presets and then adjust them towards what I wanted — it's easier to have a base EQ to alter. I must have tried all the different presets dozens of times. When learning, it's about doing and trying. Experimenting. You learn through this.
d) Compression
Compression is quite a hard topic to talk about. From my studies I understood that compression… compresses!
This means it turns down the loudest sounds (those that exceed your threshold), bringing them closer to the level of the rest of the track; with make-up gain, the quieter parts then come up relative to the peaks. How exactly it compresses is defined by how the knobs are turned. Here's a nice explanation of compressors (The Beginner's Guide to Compression) — you may want to investigate later how to use the compressors in your DAW, but the way they work is universal.
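The threshold/ratio idea can be sketched in a few lines. This Python toy (my own illustration — a crude per-sample compressor with no attack/release smoothing and no make-up gain, so far simpler than any real plugin) shows the core math:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Toy compressor: any level above `threshold` is reduced by `ratio`.

    E.g. with ratio 4, a signal that exceeds the threshold by 0.4
    comes out exceeding it by only 0.1. Quiet parts pass unchanged.
    """
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

# The 0.9 peak is squeezed down toward the threshold (0.5 + 0.4/4 = 0.6);
# the quiet 0.1 sample is left alone.
print(compress([0.1, 0.9]))  # [0.1, 0.6]
```

The "tighter sound" comes from this narrowing of the gap between loud and quiet moments; real compressors just do it smoothly over time instead of per sample.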
I used compressors on most instruments to get a slightly tighter sound — but not on all of them, because not everything needs to be compressed.
e) Reverb
I used it for two main purposes:
I) creating a space
Some instruments output their signal directly into the audio interface, others (like microphones) capture the room sound, and there are also digital instruments; a good way to create the feeling of "the same room" across all of them is to add a little of the same reverb to every instrument (fine-tuning those that need more or less).
You can read more about this here: The Smart Way To Use Reverb In Your Mix
Another benefit is the simplicity and effectiveness of running all (or most) of your tracks through one reverb. It can instantly glue your tracks together giving your mix a subtly cohesive sound. This is great because it takes no time at all, is easy to setup, and can really improve the sound of your mix. If you’re dropping reverbs on individual tracks, there’s a good chance you’ll have lots of reverb types setup and you’ll potentially create a disjointed sound.
II) as an effect
I’m a fan of accentuated reverb on voices and instruments. This purpose has more to do with taste than with anything else. Add a reverb to an instrument — now turn stuff around.
Do you like what it sounds like? Perfect — that's done. Change more knobs, or remove the reverb altogether — maybe it was too much.
f) More effects
I used a lot of effects besides the more “base” ones mentioned above.
These effects were used by trial and error. They include:
I) Delay
In essence, delay repeats what you played, but only after a delay. After going crazy on the knobs it can mean reversed sounds coming back to you, or repeats so fast they sound like an animal screaming. Weird stuff can happen — it's a lot of fun.
II) Distortion
I like the warm, fuzzy sound distortion adds to an instrument. Used in very small doses it can add impact to your track.
III) Everything else
Effects are fun. I experimented a lot with them. Turn one up, chain another after it, add a third, remove all but the last, rotate virtual knobs, experiment, experiment…
Mastering, turns out, is not the same as mixing.
Mastering is the final step of audio post-production. The purpose of mastering is to balance sonic elements of a stereo mix and optimize playback across all systems and media formats. Traditionally, mastering is done using tools like equalization, compression, limiting and stereo enhancement.
It's a production technique. After the tracks are mixed at levels that leave headroom (peaking at around −8 dB), they're bounced, and then the whole song (not instrument by instrument) is processed to make it louder (released songs are quite loud, sitting near 0 dB a lot of the time) and to shape the overall sound of the track.
I was quite surprised when it was suggested to me that I handle the mastering of all my songs together, in the same project.
This means bouncing each mixed song (making sure it has some headroom, i.e. peaking at levels around −8 dB), and then creating a new, separate project to handle them all together. This helps keep all the tracks at a consistent volume and sound. This way you can also edit how the songs end and start in relation to each other. Shaping the album's sound as a whole helps.
There’s no point in trying to get breakthroughs here because the following guide has just about everything I learned (because I learned from it).
I used a free template and followed a guide from a mixing website: The 6 Life-Saving Tips For Mastering in Logic Pro X (the template is linked in the guide).
I really can't explain it any better than that website, so if you're looking for how I did my mastering — just open up the guide.
Both Mastering and Mixing are highly complex topics, I had to research a lot and experiment with everything a lot. It is not “easy”.
For a few months I recorded songs, I mixed those songs. I removed all the mixing. Mixed them all over again. Removed all the tracks but the drums. Recorded everything again. Sang and sang and sang take after take. Mixed everything again. Re-mixed everything one more time.
I listened and listened and listened to my own songs thinking how unfinished they were.
Look at how many times I sang Controlo Emocional. The last version is probably not even the best — but I’ll keep improving and do the next song better.
I was too harsh on myself. It’s hard to be your own critic. It’s hard to judge your own work. Friends and family told me that it was good! I didn’t really listen and kept on reworking the same songs. I learned a lot, that’s true, but I could have been learning along new songs and new ideas.
It's harder to be impartial when you've listened to the same thing over and over a million times. You get desensitised. Listen to everyone around you — and consider stopping. This does not mean you shouldn't try your best — it means being aware that you are doing your best, and that you will keep on learning; this is not the end.
Finally, when I was completely saturated I decided — this is nuts. I’m going around and around, I must finish this and move on to the next step, and then to the next project.
And so I gave myself a deadline: I had two weeks to finish the album. So I did. It's not perfect — I'd have to keep learning forever for it to be perfect (and also have way better equipment) — but I'll learn along other projects. Sticking with it would have had no benefit.
Also, the less time you take mixing and mastering, the less you grow tired of the song you’re working on, resulting in better judgment.
At this point I'll just go through what I did in particular, because there's not much I can teach about photography. The only thing I learned is that I like posterization a lot.
(Posterization or posterisation of an image entails conversion of a continuous gradation of tone to several regions of fewer tones, with abrupt changes from one tone to another. This was originally done with photographic processes to create posters.)
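Under the hood, posterization is just quantization: each tone gets snapped to one of a few evenly spaced bands, which is what produces those abrupt jumps between flat regions. A minimal Python sketch for a single 8-bit channel (my own illustration — `posterize` is a hypothetical helper, not a Photoshop function):

```python
def posterize(value, levels=4):
    """Map an 8-bit tone (0-255) to one of `levels` evenly spaced tones."""
    step = 256 // levels
    band = min(value // step, levels - 1)  # which flat band the tone falls in
    return band * step

# A smooth gradient collapses into a few flat bands with abrupt edges.
print([posterize(v) for v in [0, 70, 140, 250]])  # [0, 64, 128, 192]
```

Apply this per channel (R, G, B) and you get exactly the poster-like look described above; fewer levels means harsher, flatter regions.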
For my album cover I took a picture. A particularly interesting picture — I stood upside down holding a chair. Further down is the original picture.
Then I applied posterization and tweaked the colours of the picture to highlight the bananas and the chair as best I could.
This is the finished result, aka my album cover
And the following picture is the original photo.
I honestly don't like the end result that much. I think the beginning is way too slow, and that the rest gets quite a bit better.
For my music video I did nothing more than recording random things on my normal iPhone 6. I recorded in Peniche (the sea images), in Alcochete and some other random places.
I added a lot of filters, repetition and reverse to the footage I had.
This was the end result:
EDIT: I created one more videoclip (21/3).
For this one I made 600 different frames in Photoshop. I started with a black screen and kept adding stuff, saving almost every "change" as a frame. I just experimented the whole way. I used my mouse to draw, and only the default presets the software had.
The result is at least interesting:
To get published with a streaming platform you must have a contract with it.
1. You can make this contract yourself — which is quite complicated, and you'd probably need a lawyer to do it for you. It's also not obvious how to go about it; I just read that it was possible.
2. You can join a label — a label will handle distribution and marketing for you.
3. You can sign up at a distributor — this is what I did.
Distributors… distribute your music. It’s an awesome way for independent artists to release their songs with almost no trouble whatsoever.
I used one of the distributors recommended by Spotify — DistroKid. They release your music to all platforms in their program.
I researched a bit about distributors before deciding which one to take.
DistroKid's cheapest plan lets you release as many songs as you like for $20 a year. This plan doesn't allow you to set a release date for a song; the next, more expensive one does (I'm using the cheapest).
As long as you keep paying your annual fee, your songs remain live. If you want, though, you can pay for "Legacy" so that an album or song stays in stores forever, independently of the annual fee. DistroKid doesn't take a cut of streams or downloads.
CD Baby is the second option I considered for releasing my album. CD Baby charges per album (something like DistroKid's Legacy fee), but takes a small cut of streams and downloads.
After checking out both options (and some others), I decided to go with DistroKid.
I uploaded my songs to their platform. After 7 days they said everything was ready to go. 3 days later I received an email saying I had been added to Apple Music, and some hours after that, to Spotify!
Side note: DistroKid has a referral program. If anyone is planning on signing up with DistroKid, I'd be really happy if you used my referral link. It's a 7% discount for you, and I get $5 on the platform (which would be great for reducing my costs): My DistroKid Referral Link
Promoting your work is quite the hard bit.
A few days before releasing the album, I published the album art and the music video for Circles on my social networks: Instagram, Twitter and Facebook.
When I released the album I shared it and asked some friends who told me they liked it to share.
Some of my friends did, and posted on their own stories/profiles.
My dad also posted the album on facebook (and got a reply from one of my university professors! — they had studied together in college ahah)
Overall, about 60 people listened to my album. Some 5–10 listeners liked and saved my stuff, and I got 30 follows on Spotify from these friends.
But my objective was not to get direct listens — rather, listeners who came to me through playlist and algorithm suggestions.
I republished a track on twitter with the hashtag #newmusic and because I got 10 likes I was added to a playlist called #listige (???)
I posted on some subreddits related to music and also at r/addmetospotify that added me to a playlist of reddit musicians.
I’m a solo composer, producer and musician from Portugal, I releas —
I wrote emails. Sent about 10–15 emails to blogs and people.
I know that's nothing. I was writing a different email every time I found out about some place to send my music to. I decided to write this article first; then I'll write one good email I can send to way more places more easily.
(BTW, I didn't get a single reply from the ones I sent.)
Promotion is difficult. Time and money are required. I've got time…
Besides, you need to have something really good.
I thought about using ads. Then I thought — maybe for the next album — I’m not sure I’m confident enough in this one.
What a journey.
It’s crazy how much time I put into learning and working for this. I’d do it all over again (and I will — I want to create more music, more albums).
A lot of time spent inside my room. My parents and siblings heard me sing the same sentences over and over countless times.
I’m studying computer science in college, I would record at weekends or at night. I got frustrated a lot along the way — it was NOT easy.
However, I took my time — there is no rush to finish your side project (besides getting tired of it) — after all, your side project keeps you going, and you keep your side project going. It’s a two sided relationship but they’re both you.
After these months cultivating this project I’m now ready, just after I’m done sending the last emails, to call it *complete* — and start a new one.
…I’m thinking about a game… really focused on the soundtrack…!
Thank you for reading, Romes.
P.S: If you have any questions I’d be happy to answer them!