The subject of the course work: an introduction to the general principles of sound production.
Practical importance of the course work: its ideas, approaches and results can be used effectively in preparing lectures on pedagogical subjects, in creating manuals and methodological recommendations, and in popularizing practical experience.
The structure and volume of the course work: the work consists of an introduction, two chapters, four sections, general conclusions and recommendations, and a list of the literature used.
Chapter I. Characteristics of types of sound recording
1.1. There are two main types of sound recording: analog and digital.
The purpose of this course work is to study the features of sound recording and its subsequent processing.
Tasks set:
Study the features of sound recording;
Process an existing sound file using the Adobe Audition program.
Nowadays, we are increasingly confronted with digital audio. We download music from the Internet, use portable mp3 players, and set our favorite song as a mobile-phone ringtone. A significant part of modern music is created with the help of a computer: you no longer need a large number of musicians and instruments, because they can be replaced by one person and the appropriate software for synthesis and sound processing. Even people far removed from music creation increasingly encounter sound processing, for example when setting the chorus of a memorable song as a ringtone or making a "cut" of music to suit the mood of a slideshow.
Features of computer sound recording
Sound recording is the process of capturing sound and storing it in a form that allows later playback.
High-quality sound recording requires a professional microphone, an acoustically suitable room, a sound editor, and a high-quality sound card.
Microphone and sound recording
It is very important to choose the right type of microphone in order to get a good recording. Do not skimp on the microphone: extraneous noise is difficult to remove without distorting the useful signal. Noise results from an incorrectly chosen microphone type, poor microphone quality (low sensitivity, poor signal-to-noise ratio) and a lack of wind protection (in studio recording, protection from the vocalist's plosive consonants). Noise can also result from incorrect recording technique, recording in a room with poor acoustics, or poor soundproofing against external noise. All of this can be corrected by available means. Today you can buy a high-quality stereo microphone at a fairly affordable price. Stereo recording compares favorably with mono because picking up sound with two microphones achieves greater realism, and hence better recording quality. Stereo microphones are mostly condenser microphones designed for indoor recording, but nothing prevents you from using them for recording outdoors.
For this you need:
1) A recording device (laptop);
2) A phantom power supply;
3) An amplifier, connecting cables and wind protection.
The main problem is that the cost of this kit often far exceeds the cost of even the highest-quality stereo microphone. There is an alternative solution: portable stereo recorders such as the Zoom H4n. They are convenient because they have their own power supply and memory cards and can record sound at sampling rates up to 96 kHz.
Every engineer has favorite microphones, so here you should rely on your own taste.
When recording sound (musical or not), bear in mind that, along with choosing the right microphone for home recording, its correct placement relative to the sound source is just as important. Of course, this does not exhaust the topic of microphones. The choice of microphone depends not only on the recording location but also on what needs to be recorded: different microphones are better suited to different musical instruments and sound sources (vocals, drum kit, orchestra, etc.).
As for recording technique, there is likewise a direct dependence on what sound you prefer and on which instrument is being recorded. For example, you can exploit the proximity effect by placing the microphone directly in front of the vocalist. A guitar can be recorded with different techniques, obtaining slightly different timbres of the same instrument.
If you need to record sounds in nature, then in 90% of cases highly directional (shotgun) microphones are best suited, and they also require a good wind-protection system. Binaural recordings are recordings made by two microphones placed in a model of a human head, or rather, in its auricles.
Thanks to the characteristic reflections of sound and the minimal differences between the right-ear and left-ear signals, very natural recordings are achieved. Listening to binaural recordings through headphones often gives the impression that the sound source is fully present in the room. However, this recording method has its own problem: phase distortions that appear when the stereo is converted to mono (which can certainly happen when the recording is further used by the listener).
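The mono-sum problem mentioned above can be sketched numerically. In the toy example below (all values are illustrative, not from the text), a 1 kHz component reaches the second microphone half a cycle later, so it is 180 degrees out of phase; it survives in stereo but cancels almost completely when the channels are averaged to mono.

```python
# Sketch: why out-of-phase stereo content can vanish in a mono sum.
import math

SAMPLE_RATE = 48_000
FREQ = 1_000.0   # illustrative 1 kHz tone
N = 480          # 10 ms of audio

left = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(N)]
# Right channel shifted by half a wavelength: 180 degrees out of phase.
right = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE + math.pi) for n in range(N)]

# A mono "fold-down" is just the average of the two channels.
mono = [(l + r) / 2 for l, r in zip(left, right)]

peak_stereo = max(abs(s) for s in left)
peak_mono = max(abs(s) for s in mono)
print(f"peak of one channel: {peak_stereo:.3f}")
print(f"peak of mono sum:    {peak_mono:.3e}")  # near zero: the tone cancels
```

Real binaural recordings have phase offsets that vary with frequency, so the cancellation shows up as "comb filtering" coloration rather than total silence, but the mechanism is the same.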
Premises and sound recording
In addition to the useful signal, background noise is present in the recording. If this noise is the result of wind, you can get rid of it with special protection for the microphone. If the sound is recorded in a studio and there are unwanted sound sources nearby, soundproofing is necessary (and when choosing a studio location, you should always carefully study the noise picture of the surrounding space for any serious noise sources that could interfere with further work).
Soundproofing is as much an art as choosing the best microphone/recording technique. Most often, soundproofing is done to absorb street noise. Sometimes a second wall is placed, sometimes the walls are simply treated with sound-absorbing materials. It often makes sense to replace doors and windows. What exactly needs to be done depends on each specific case and possibilities.
In addition to the problem of external noise, there is also the problem of the acoustics of the room itself, in which sound recording will be made.
Each room has its own acoustics: the characteristic reflections of sound waves from its surfaces, be it reverberation, echo or flutter echo.
Different rooms react differently to different sound sources, and here again the choice is relative. To record dry sound, it is best to reduce reflections to nothing (an anechoic chamber).
On entering such a room, your ears usually feel blocked. In general, sound reflections are fought in two ways: 1) increasing the area of absorbing surfaces; 2) adding shapes that break up sound waves.
In the first case, additional sound-absorbing material reduces the level of wave reflections from surfaces (by how much and at what frequencies depends on the material, see sound absorption table).
In the second case, various shapes are added to the room that scatter the sound waves into smaller ones (each wave is partially reflected in different directions), and these are absorbed by the surrounding space much faster.
Often the best solution is a combination of the two methods.
As for home recording, if it is not possible to buy special materials to improve the acoustics of the room, you can use everyday items (a clothes rack, blankets, mattresses, etc.).
This is, of course, an unprofessional solution, but it still slightly improves the room acoustics, and that makes it a solution too.
To record musical instruments, speech and vocals, it is not at all necessary to deaden the room completely. The presence of reverberation in a recording is perceived as natural, and if you record for yourself (that is, no one will artificially add reverb later), you need to decide whether the current room acoustics suit you. When damping a room, it is important that all frequencies are damped equally (not just a certain frequency range).
The placement and choice of sound-absorbing elements is a separate issue. To get high-quality recordings at home, you simply need to think about improving the room acoustics so that there are no unwanted overtones and neither too many nor too few reflections.
Quality sound recording
Of course, you will need a program to work with recordings. Usually this is one of the sound editors (Adobe Audition, Audacity, Sound Forge, WaveLab). However, if you wish, you can use the audio editors built into professional sequencers (Cubase, Logic, etc.).
After studying the opinions of experts and testing them myself, I prefer to use sound editors, in particular Adobe Audition, since they have far more noise-control tools. Here is a simple but effective method of basic recording processing in Adobe Audition:
First make a recording and open it in the program. Next, find a section of the recording that contains nothing but background noise. Select this region and choose the following command in the main menu: Effects - Noise Reduction - Noise Reduction.
The Noise Reduction window will open.
Select the "Get profile from selection" item, then save the profile to a separate file. Close this window, select the entire file (Ctrl+A) and open the same window again through the main menu: Effects - Noise Reduction - Noise Reduction.
Now load the previously saved profile and adjust the level of noise removal. It is important that the recording is not left with the characteristic artifacts of noise removal, which can be compared with the artifacts of low-bitrate recordings. Such artifacts are often the result of an incorrectly chosen noise profile. In most cases, however, this method gets rid of the noise, which can be verified by looking at the spectrogram: View - Spectral View.
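The "capture a noise profile, then reduce" idea can be sketched outside the editor as well. The toy Python gate below is only an illustration of the principle: it measures the noise floor from a noise-only region and silences everything below it. Real editors such as Adobe Audition work per frequency band (spectral subtraction), not in the time domain, and every signal and name here is made up for the example.

```python
# Toy analogue of "get noise profile from selection, then reduce":
# measure the noise floor of a noise-only region, gate samples below it.
import math
import random

def rms(samples):
    """Root-mean-square level of a list of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noise_gate(signal, noise_region, margin=2.0):
    """Zero out samples whose magnitude is below margin * noise RMS."""
    threshold = margin * rms(noise_region)
    return [s if abs(s) > threshold else 0.0 for s in signal]

# Hypothetical data: quiet hiss followed by a louder tone plus the hiss.
random.seed(0)
hiss = [random.uniform(-0.01, 0.01) for _ in range(1000)]
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(1000)]
recording = hiss + [t + h for t, h in zip(tone, hiss)]

cleaned = noise_gate(recording, noise_region=hiss)
print(rms(cleaned[:1000]))  # 0.0: the noise-only region is fully silenced
```

A spectral noise reducer does the same comparison per FFT bin, which is why it can remove hiss that sits underneath the useful signal instead of only in the pauses.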
At home, without tools for cleaning recordings of noise, it is difficult to obtain high-quality recordings, since the noise will most likely become more noticeable later, when the dynamic range is narrowed by compression.
It is also important to have the best possible headphones or monitor speakers to listen to the results of the recording: without them it is difficult to assess how well the sound was recorded. A professional sound card is just as important. To understand why, just listen to the same sound on various sound cards: on some you hear noise and a muddy sound, on others the sound is simply amazing in its transparency. The same recordings sound completely different.
1.2. Recording levels in analogue and digital recording
Well, in the days of analogue, recording with hot signals really was the best way to achieve a high signal-to-noise ratio. That's because signals would be recorded through multiple pieces of analogue equipment, each of which had a higher noise floor than the kind we experience today in digital recording. So recording with hot signals was a necessary part of maintaining a high signal-to-noise ratio throughout the signal chain. This remained best practice in 16-bit digital recording too, due to its relatively limited dynamic range.
When it comes to modern 24-bit digital recording, however, things are very different. You don't need to get the signal particularly hot to overcome the noise floor. As such, you can achieve an ample signal-to-noise ratio at a far more conservative level. So with 24-bit digital recording, the signal doesn't need to be anywhere near clipping to achieve proper audio recording levels.
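The dynamic-range gap between 16-bit and 24-bit recording can be checked with the standard rule of thumb for an ideal PCM quantizer: roughly 6.02 dB per bit plus 1.76 dB. The helper below is just that textbook formula; real converters fall somewhat short of the ideal figure.

```python
# Theoretical dynamic range of an ideal PCM quantizer.
def dynamic_range_db(bits: int) -> float:
    """Approximate SNR of an ideal quantizer: 6.02 * bits + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
# 16-bit: ~98.1 dB; 24-bit: ~146.2 dB, far above typical analogue noise floors
```

The extra ~48 dB is why a 24-bit recording peaking well below full scale still keeps the signal comfortably above the quantization noise floor.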
The Risk Of Recording Too Hot:
There is also a risk that comes with trying to record really loudly in the digital domain, one that didn't exist in analogue. You see, analogue systems offered you a sort of safety net: you could comfortably allow the signal to reach 0 VU, because even at 0 VU there was still a lot of headroom left to accommodate an unexpected spike in volume at the recording source. With digital recording, however, if you reach 0 dBFS, there's no headroom available. So recording just beneath the level of clipping becomes risky, because now, without headroom to accommodate it, that unexpected spike in volume will cause the signal to clip.
There’s Really No Benefit To Recording So Hot If You Want To Set Proper Audio Recording Levels:
There was another benefit to recording hot signals in the analogue world which really doesn’t carry over to digital. Recording with high gain levels in analogue could sound really good. The tape would saturate and sound great if you got it just right. But with digital, there’s no desirable sound to be achieved by recording with a hot signal. The sound doesn’t change. As such, recording loud won’t make the signal sound any better. So there really is nothing to be gained from recording really loud in the digital world.
How Loud Should You Record To Achieve Proper Audio Recording Levels?
So what's the answer? Well, it's a great idea to try to reintroduce the headroom that analogue recording used to provide. A great spot to aim for in digital recording is to have your loudest parts peaking no higher than -18 dBFS. This will leave you with plenty of headroom to accommodate any unexpected spikes in volume.
This might look low on the metering in your DAW. But remember, that’s because with digital, the entire usable limit is on show on your meter. So you have to allow some of it to act as headroom. The fact of the matter is that you should still achieve an ample signal to noise ratio by aiming for this level. But now you’ll also gain the advantage of the safety net that analogue recording used to provide. Try this approach to setting proper audio recording levels on your next recording session and let me know the results!
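As a sketch, the -18 dBFS check is easy to automate for floating-point samples normalized so that full scale equals 1.0. The function name `peak_dbfs` and the synthetic take below are my own illustration, not a DAW feature.

```python
# Checking recorded peaks against a -18 dBFS ceiling (full scale = 1.0).
import math

def peak_dbfs(samples):
    """Peak level of a take in dBFS; -inf for digital silence."""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0 else 20 * math.log10(peak)

# A hypothetical take peaking at 12.5% of full scale:
take = [0.125 * math.sin(2 * math.pi * n / 100) for n in range(1000)]
print(f"peak: {peak_dbfs(take):.1f} dBFS")  # -18.1 dBFS, just under the target
```

Note how low 12.5% of full scale looks on a linear meter: that visual "wasted space" is exactly the headroom the article recommends keeping.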
Chapter II. General principles of sound production and problems that arise
2.1. General principles of sound production
Mechanisms of speech sound production make vowels different from consonants. There are 4 main mechanisms that take part in the production of consonants: the power mechanism, the vibrator mechanism, the resonator mechanism and the obstructer mechanism. But the obstructer mechanism doesn’t take part in the production of vowels. From the auditory point of view a vowel is voice or tone, and the obstructer mechanism creates noise.
The leading mechanism for the production of vowels is the resonator mechanism.
The leading mechanism for the production of consonants is the obstructer mechanism.
Power mechanism
It differentiates the force of exhalation (which is weaker for vowels in comparison with consonants) and produces the air stream. The air stream makes the vocal cords vibrate and fills the resonators with air to produce sounds. The PM also differentiates muscular tension and takes part in the mechanism of aspiration (see the vibrator mechanism).
Vowels. 1) The muscular tension is diffused. 2) The PM modifies the force of the stream of air, accordingly, we differentiate between tense / i:, u:/ and lax /all the rest/ vowels.
Consonants. 1) The muscular tension is concentrated in the point of articulation. 2) The PM makes the vocal cords vibrate for voiced consonants, sonorants and semi-vowels and keeps the vocal cords apart for voiceless consonants. 3) According to the force of exhalation, consonants are subdivided into fortis /all voiceless consonants: p, t, k, h, etc. = we need more effort to produce them!/ and lenis /all voiced consonants: b, d, etc./
Vibrator mechanism
It is responsible for the vibration of the vocal cords which gives voice or tone. It also differentiates duration of vibration of the vocal cords.
Vowels. According to the duration of vibration of the vocal cords, we single out long /i:, u:, a:, o:, ә:/ and short /all the rest/ vowels. The length is historical.
According to the speed (=rate) of vibration of the vocal cords, there are vowels different in pitch. The more rapid the rate of vibration, the higher is the pitch.
Consonants. 1) The VM creates voice for voiced consonants, sonorants and semi-vowels. 2) The VM gives the mechanism of aspiration. Aspiration is an additional puff of air for some consonants. There appear postponed vibrations after the release of /p, t, k, t∫/: the vocal cords don't start to vibrate immediately. There is a gap within which a puff of air is given to the consonant, and only after the puff of air has been supplied do the vocal cords start vibrating for the following vowel. So there are aspirated and non-aspirated consonants.
Resonator mechanism
Vowels. !leading mechanism! In the production of vowels it involves the work of the mouth resonator only. The RM is the leading mechanism in the production of vowels as it specifies tone creating different vowels. The mouth resonator changes its shape, size and volume making all vowels different. The position of the lips and the movements of the tongue modify the shape, size and volume of the mouth resonator.
According to the horizontal movement of the tongue, the English vowels fall into:
- front / i:, e, æ /
- front-retracted / I /
- central / ә , ә: /
- back / o, o:, u:/
- back-advanced / a:, u, /\ /
According to the vertical movement of the tongue, the English vowels are subdivided into 3 classes. Each of these classes is further subdivided into 2 variations – narrow and broad.
            | close = high | half-open = mid | open = low
narrow      | i: , u:      | e , ә:          | o: , /\
broad       | I , u        | ә               | æ , a: , o
According to the position of the lips, the English vowels are classified into rounded / o, o:, u, u: / and unrounded /all the rest/.
According to the stability of articulation (= the stability of the shape, size and volume of the mouth resonator) the English vowels are divided into: 1) 10 monophthongs (the mouth resonator doesn't change its properties), 2) 8 diphthongs (during their pronunciation the tongue and the lips move from one vowel position to another, with the result that the properties of the mouth resonator change) and 3) 2 diphthongoids /i:, u:/ (a diphthongoid is a vowel sound intermediate in character between a monophthong and a diphthong: its elements are very close to each other, and the tongue and/or the lips move an extremely short distance between them).
Consonants. In the production of consonants the RM can involve the work of two resonators: the mouth resonator or the nasal resonator. It provides the possibility of changing the resonator (its shape, size and volume) because the soft palate can be raised (oral sounds) or lowered (nasal sounds).
Oral consonants: all voiced and voiceless consonants + 1 sonorant / l /
Phonetics is the science that studies the sound matter of language, its semantic functions and its lines of development. Phonetics as a branch of linguistics studies sounds in their broad sense. The sound system of a language comprises two levels: segmental sounds (vowels and consonants) and prosodic (suprasegmental) phenomena (pitch, stress, tempo, rhythm, pauses). Phonetics also studies how sounds are produced and their acoustic properties. Phonetics has a long history: it was known to the ancient Greeks, but as a science with its own subject and methods it began to develop in the second half of the 19th century in Western Europe and in Russia. Depending on which sound phenomena are studied, phonetics is subdivided into four main branches: articulatory, acoustic, auditory and functional.
Articulatory phonetics studies how the air starts moving and all the movements of the speech organs while producing sounds. It is concerned with the study of sound as a result of the activities of the speech organs. It deals with our voice-producing mechanism, the way we produce sounds, and prosodic phenomena. It studies respiration, phonation (voice production), articulation and also the mental processes necessary for the mastery of a phonetic system. The methods employed in articulatory phonetics are experimental methods and the method of direct observation.
Acoustic phonetics deals with the acoustic aspect of sounds: it studies how the air vibrates between the speaker's mouth and the listener's ear. It studies speech sounds with the help of experimental (instrumental) methods.
Auditory (perceptual) phonetics studies the hearing process: one's perception of segmental sounds, pitch variation, loudness and duration. It studies the ways in which sound perception is determined by the phonetic system of a language. The methods used in perceptual phonetics are also experimental and include various kinds of auditory tests.
Functional phonetics (phonology, social phonetics) is a purely linguistic branch of phonetics that deals with the functions of sound phenomena.
This aspect was first introduced in the works of the Russian linguist Baudouin de Courtenay, the founder of the phoneme theory. Later this theory was developed by the Russian linguists Shcherba and Reformatsky. They even claimed that phonology should be separated from phonetics, because they considered phonetics a biological science while only phonology could be described as a linguistic science. Linguists of other countries disagree with this division: it is not logical to separate form from function and to exclude phonetics from the linguistic sciences. The other branches of phonetics are: special (a concrete language), general (the speech mechanism), historical (historical development), descriptive (a particular period), comparative (comparative study of the phonetic systems of two languages), applied/practical (practical applications) and theoretical (theory).
Aspects of sound phenomena.
Sound phenomena have different aspects, which are closely interconnected: the articulatory, the acoustic, the auditory and the linguistic.
The articulatory (sound-production) aspect. Speech sounds are products of the human organs of speech: the diaphragm, the lungs, the bronchi, the trachea, the larynx with the vocal cords in it, the pharynx, the mouth cavity and the nasal cavity. Sound production is impossible without respiration, which consists of two phases: inspiration and expiration. Speech sounds are chiefly based on expiration, though in some African languages there are sounds produced by inspiration. Expiration during which speech sounds are produced is called phonic expiration. In speech, expiration lasts much longer than inspiration, whereas in quiet breathing inspiration and expiration take about the same period of time. One part of sound production is phonation, or voice production.
The acoustic aspect. Like any other sound of nature, speech sounds exist in the form of sound waves and have the same physical properties: frequency, intensity, duration and spectrum. A sound wave is created by a vibration which may be periodic or non-periodic, simple or complex. The vocal cords vibrate in such a way that they produce various kinds of waves simultaneously. The basic vibrations of the vocal cords over their whole length produce the fundamental tone of the voice. The simultaneous vibrations of each part of the vocal cords produce partial tones (overtones or harmonics). The number of vibrations per second is called frequency (measured in hertz, or cycles per second). The intensity of speech sounds depends on the amplitude of vibration; changes in intensity are associated with stress in those languages which have dynamic stress. Intensity is measured in decibels. Like any other form of matter, sound exists and moves in time: any sound has a certain duration. The duration of a sound is the quantity of time during which the same vibrations continue (measured in milliseconds).
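The acoustic terms above (fundamental tone, overtones, frequency, duration) can be illustrated with a short synthesis sketch. The frequencies and partial amplitudes below are arbitrary choices for the example, not values from the text.

```python
# A complex periodic wave: a fundamental plus weaker harmonics (overtones).
import math

SAMPLE_RATE = 8_000
F0 = 100.0  # fundamental frequency in Hz

def complex_wave(t, partials):
    """Sum of (harmonic_number, amplitude) partials at time t seconds."""
    return sum(a * math.sin(2 * math.pi * k * F0 * t) for k, a in partials)

# Fundamental at full amplitude plus two progressively weaker overtones.
partials = [(1, 1.0), (2, 0.5), (3, 0.25)]
wave = [complex_wave(n / SAMPLE_RATE, partials) for n in range(SAMPLE_RATE)]

# Because every partial is a whole-number multiple of F0, the result is
# periodic with the period of the fundamental (1/F0 = 10 ms = 80 samples):
period = int(SAMPLE_RATE / F0)
assert all(abs(wave[n] - wave[n + period]) < 1e-9 for n in range(100))
```

Changing the relative amplitudes of the partials changes the spectrum (and hence the perceived timbre) without changing the fundamental frequency, which is what distinguishes two voices singing the same pitch.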
The auditory (sound-perception) aspect. Speech sounds may also be analysed from the point of view of perception. The perception of speech sounds involves the activity of our hearing mechanism, which acts as a monitor of what we ourselves are saying. The process of communication would be impossible if the speaker himself did not hear the sounds he pronounces. If the link between listening and pronouncing is disturbed, disturbances in the production of speech sounds are likely to appear. The better we hear the differences between the sounds, the better we pronounce them.
The linguistic aspect. Segmental sounds and prosodic features are linguistic phenomena. They constitute meaningful units: morphemes, words, word-forms, utterances. All the words of a language consist of speech sounds. Most of the meaningful distinctions of the language are based on distinctions in sound. Sounds and prosodic features serve to differentiate the units they form. Simultaneously, the sound phenomena enable the listener to identify them as concrete words, word-forms or utterances. Thus, segmental sounds and prosodic features of speech perform constitutive, distinctive and identificatory functions.
Principles of classification of speech sounds
In all languages speech sounds are traditionally divided into two main types: vowels and consonants. From the articulatory point of view the main principles of the division are as follows: the presence or absence of obstruction; the distribution of muscular tension; the force of the air stream coming from the lungs. Vowels are speech sounds based on voice. There is no obstruction in their articulation. The muscular tension is spread evenly throughout the speech organs. The force of the air stream is rather weak. Consonants are speech sounds in the articulation of which there is an obstruction (plosion or friction). The muscular tension is concentrated at the place of obstruction. The air stream is strong. The articulatory boundary between vowels and consonants is not well marked. There are speech sounds that occupy an intermediate position between vowels and consonants and have common features with both of them. These are the sonorants [m, n, ŋ, j, l, w, r]. There is an obstruction in their articulation and the muscular tension is concentrated at the place of obstruction, as in the production of consonants. Like vowels, they are based on voice. The force of the air is weak, as in the case of vowels. Due to their great sonority some sonorants can be syllabic in particular positions, but generally sonorants do not perform the function of syllable formation. That is why they are attributed to consonants. From the acoustic point of view vowels are complex periodic vibrations (tones), while consonants are non-periodic vibrations (noises).
2.2. Problems that arise in the production of sound
Vowels are speech sounds based on voice. There is no obstruction in their articulation. The muscular tension is spread evenly throughout the speech organs. The force of the air stream is rather weak. The various qualities of English vowels are determined by the oral resonator: its size, volume and shape. The resonator is shaped by the movable speech organs, the tongue and the lips. The position of the speech organs in the articulation of vowels may be kept for a variable period of time. All these factors determine the principles according to which vowels are classified:
- according to the horizontal movement of the tongue: front (i:, ǽ, e, ei, eə, ai, au), front-retracted (I, iə), mixed (Λ, ə, 3:, əu), back-advanced (u, uə), back (u:, a:, uə, oi, o:, o);
- according to the vertical movement of the tongue: close/high (i:, I, iə, u:, u, uə), mid (e, ei, eə, ə, 3:, Λ), open/low (ǽ, a:, ai, au, o:, o);
- according to the position of the lips: rounded (o, o:, u, u:), unrounded (all the rest);
- according to the degree of muscular tension: tense (long vowels), lax (short vowels);
- according to the force of articulation at the end of the vowel: checked (historically short vowels under stress), free (long monophthongs and diphthongoids, unstressed vowels);
- according to the stability of articulation: monophthongs (I, e, ǽ, a:, Λ, o, o:, u, ə), diphthongs (ei, ai, oi, au, ou, iə, eə, oə, uə), diphthongoids (i:, u:);
- according to length: long, short.