Friday, 23 June 2017

Btec video


     Intellectual property: 
  1. A work or invention that is the result of creativity, such as a manuscript or a design, to which one has rights and for which one may apply for a patent, copyright, or trademark.


    Public Domain:
    Music is considered to be in the public domain if it meets any of the following criteria: all rights have expired, the authors have explicitly placed the work in the public domain, or there never were any copyrights. In the U.S., any musical work published before 1922, in addition to those voluntarily placed in the public domain, exists in the public domain. In most other countries, music generally enters the public domain fifty to seventy-five years after the artist's death.

    Music licensing: the licensed use of copyrighted music. Music licensing is intended to ensure that the owners of copyrights on musical works are compensated for certain uses of their work. A purchaser has limited rights to use the work without a separate agreement.

    MCPS-PRS 
    PRS pays royalties to its members when their works are broadcast on TV or radio, performed or played in public (whether live or through a recording), or streamed or downloaded. MCPS pays royalties to its members when their music is copied as physical products such as CDs and DVDs, streamed or downloaded, or used in TV, film or radio.

    Monitor and control:
    VU meter is an audio metering device. It is designed to visually measure the "loudness" of an audio signal. The VU meter was developed in the late 1930s to help standardise transmissions over telephone lines. It went on to become a standard metering tool throughout the audio industry. VU meters measure average sound levels and are designed to represent the way human ears perceive volume.

    PPM:
    A Peak Programme Meter (PPM), sometimes referred to as a peak reading meter, is an audio metering device. Its general function is similar to that of a VU meter, but there are some important differences. The rise time of a PPM is much faster than that of a VU meter, typically 10 milliseconds compared to 300 milliseconds, which makes transient peaks easier to measure.
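
    The effect of that rise time can be shown with a rough Python sketch (not a broadcast-spec meter): a fast envelope follower stands in for a PPM and a slow one for a VU-style average meter, and the same short burst produces a much higher reading on the fast meter. The time constants and test signal are only assumptions for the example.

    import math

    SAMPLE_RATE = 48000

    def meter_reading(signal, rise_time_s, sample_rate=SAMPLE_RATE):
        """One-pole envelope follower; a shorter rise time reacts faster to transients."""
        coeff = math.exp(-1.0 / (rise_time_s * sample_rate))
        level = 0.0
        highest = 0.0
        for sample in signal:
            rectified = abs(sample)
            # move the meter towards the rectified signal, faster for short rise times
            level = rectified + coeff * (level - rectified)
            highest = max(highest, level)
        return highest

    # 5 ms full-scale burst followed by silence - a typical transient.
    burst = [1.0] * int(0.005 * SAMPLE_RATE) + [0.0] * int(0.5 * SAMPLE_RATE)

    print("fast, PPM-style meter (10 ms rise):", round(meter_reading(burst, 0.010), 3))
    print("slow, VU-style meter (300 ms rise):", round(meter_reading(burst, 0.300), 3))
    # The fast meter registers far more of the transient than the slow average-reading meter.
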
    dBs:
    The decibel (dB) is used to measure sound level, but it is also widely used in electronics, signals and communication. The dB is a logarithmic way of describing a ratio; the ratio may be power, sound pressure, voltage, intensity or several other things.
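
    As a small worked example (in Python), the dB value follows directly from the logarithm of the ratio: 10 log10 for power ratios and 20 log10 for amplitude ratios such as voltage or sound pressure.

    import math

    def db_power(ratio):
        """Decibels for a power ratio: 10 * log10(P1 / P0)."""
        return 10 * math.log10(ratio)

    def db_amplitude(ratio):
        """Decibels for an amplitude (voltage, sound pressure) ratio: 20 * log10(A1 / A0)."""
        return 20 * math.log10(ratio)

    print(round(db_power(2), 1))        # doubling the power    -> +3.0 dB
    print(round(db_amplitude(2), 1))    # doubling the voltage  -> +6.0 dB
    print(round(db_amplitude(0.5), 1))  # halving the amplitude -> -6.0 dB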

    SMPTE: 
    Timecodes are added to film, video or audio material, and have also been adapted to synchronise music. They provide a time reference for editing, synchronisation and identification. Timecode is a form of media metadata. The invention of timecode made modern videotape editing possible and eventually led to the creation of non-linear editing systems.
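
    As an illustration of how timecode gives a frame-accurate time reference, the sketch below (assuming a non-drop-frame 25 fps frame rate, as used in PAL regions) converts between hours:minutes:seconds:frames and an absolute frame count.

    FPS = 25  # assumed frame rate; drop-frame timecode (29.97 fps) needs extra rules

    def timecode_to_frames(tc, fps=FPS):
        hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
        return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

    def frames_to_timecode(total, fps=FPS):
        frames = total % fps
        seconds = (total // fps) % 60
        minutes = (total // (fps * 60)) % 60
        hours = total // (fps * 3600)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    print(timecode_to_frames("01:00:00:00"))  # 90000 frames at 25 fps
    print(frames_to_timecode(90012))          # 01:00:00:12
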
    Documentation and storage: 
    Labelling work and data well means you can find it easily when it is needed, so previous work remains accessible in the future. Backing up data to an external hard drive is necessary to make sure work is not stored in only one location, and is therefore not lost if that location is damaged.

Thursday, 22 June 2017

Mixing and Editing

Mixing audio for:

Radio: Radio will play multiple premixed tracks. This audio is usually compressed quite heavily compared to other forms of audio, such as sound for games. EQ is also used; EQ in a broadcast processor has two uses. The first is to help shape a unique audio signature by boosting or cutting selected frequencies. The second is to help compensate for the standard 15 kHz FM roll-off that can cause off-air audio to sound dull compared to CDs and digital formats, where audio is flat out to 20 kHz. Radio edits might also be needed: a radio edit is a modification that makes a song more suitable for airplay, whether adjusted for length, profanity, subject matter, instrumentation, or form. Radio edits may also be used for commercial single versions.

Music: Mixing audio for music is complex; the music is normally split into separate tracks. These tracks are blended using various processes such as EQ, compression and reverb. The goal of mixing is to bring out the best in a multi-track recording by adjusting levels, panning, and time-based effects.


Games: Mixing audio for games is very similar to mixing music, in that music and soundtracks are used within games, and processes such as EQ, compression and reverb are much the same. However, games require sound effects similar to the use of Foley in films; to mix these, panning is used extensively, as the audio needs to give a sense of where the object is in the game. Volume control and panning are also used heavily when a particular ambience is to be achieved.

Live sound: In live sound, individual instruments or voices are also split into tracks. These tracks are blended using various processes such as EQ and reverb. The goal of mixing is to bring out the best of the live sound, especially by adjusting levels and EQ so each component can be heard and has its own space within the mix, ensuring that every instrument is heard by everyone at the event.

Mixing for record release: When mixing a song for record release, the track should peak no louder than -6 dB, because mastering engineers need enough headroom in the mix to carry out their process.
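
A quick way to check this is to measure the peak of the rendered mix in dBFS. The sketch below is a rough Python illustration; the sample values are only placeholders standing in for a bounced mix.

    import math

    def peak_dbfs(samples):
        """Peak level of float samples (full scale = 1.0) in dBFS."""
        peak = max(abs(s) for s in samples)
        return 20 * math.log10(peak) if peak > 0 else float("-inf")

    mix = [0.4, -0.45, 0.3, -0.2]      # placeholder for a rendered mix bus
    level = peak_dbfs(mix)
    print(round(level, 1))             # about -6.9 dBFS for these values
    print("OK for mastering" if level <= -6.0 else "too hot, pull the mix down")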

Production possibilities: Different structures of a song may be produced during the mixing and editing stage; it might then be decided that these versions are combined, which requires editing them together into the final chosen structure.

Live sound: For live sound, a sound check prior to the performance is needed to ensure a good mix of all the instruments that also suits the room. Mixing also needs to continue during the show, as a room full of people sounds different from an empty one.

Analogue: When mixing for analogue formats such as vinyl records, the bass guitar and any other low frequencies are made mono, because stereo information in the lower frequencies would make the needle jump out of the groove.

Compression and Equalisation: Compression is used to level out the dynamic range of individual tracks such as vocals and drums, which evens out the volume across the whole mix. Equalisation is used to give instruments their own frequency space in the track, so each instrument can be heard clearly instead of the mix sounding muddy.
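
The level reduction a compressor applies can be written as simple arithmetic on dB values: anything over the threshold is scaled down by the ratio. The sketch below is a static illustration only (real compressors also have attack and release times), and the threshold and ratio are assumed values.

    def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
        """Output level in dB for an input level in dB (static 4:1 compression curve)."""
        if level_db <= threshold_db:
            return level_db                     # below threshold: left alone
        excess = level_db - threshold_db
        return threshold_db + excess / ratio    # above threshold: reduced by the ratio

    for level in (-30, -18, -6, 0):
        print(f"in {level:+} dB -> out {compress_db(level):+.1f} dB")
    # the loud peaks come down, so the overall dynamic range is narrower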

Reverberation:
Reverberation occurs naturally when a person sings, talks, or plays an instrument acoustically in a hall or performance space with sound-reflective surfaces. The sound of reverberation is often electronically added to the vocals of singers and to musical instruments. This is done in both live sound systems and sound recordings by using effects units. Effects units that are specialised in the generation of the reverberation effect are commonly called reverbs. 

Synchronisation with video: Audio for projects such as animations needs to be edited so that speech is synchronised with movement and actions in the video. The same applies to Foley, where sounds need to be synchronised with actions happening on screen, for example the sound of a door shutting.


Sequencing software: A music sequencer is a device or application software that can record, edit, or play back music by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control, and possibly audio and automation data for DAWs and plug-ins.

MIDI, Synthesisers and Sampling: MIDI is a technical standard that describes a protocol, digital interface and connectors, and allows a wide variety of electronic musical instruments, computers and other related devices to connect and communicate with one another. A single MIDI link can carry up to sixteen channels of information, each of which can be routed to a separate device. Synthesisers may either imitate instruments such as piano, Hammond organ, flute or vocals; imitate natural sounds such as ocean waves; or generate new electronic timbres. In music, sampling is the act of taking a portion, or sample, of one sound recording and reusing it as an instrument or a sound recording in a different song or piece.
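
Because MIDI carries performance information rather than audio, a note can be expressed in just a few bytes. The sketch below builds a raw note-on message by hand to show where the 16 channels live (in the low four bits of the status byte); the note and velocity values are arbitrary examples.

    def note_on(channel, note, velocity):
        """Build a 3-byte MIDI note-on message (channel 0-15, note and velocity 0-127)."""
        status = 0x90 | (channel & 0x0F)        # 0x9n = note-on for channel n
        return bytes([status, note & 0x7F, velocity & 0x7F])

    print(note_on(0, 60, 100).hex())   # 903c64 - middle C at velocity 100 on channel 1
    print(note_on(9, 36, 127).hex())   # 99247f - note 36 on channel 10 (often a kick drum)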

Editing

Dance and Adverts: Music may need to be edited for dance performances, as dancers may want the music to change at certain points from one track to another. Music can also be lengthened or shortened to accommodate the dancers' needs. Editing might also be needed to shorten songs for adverts, and the music may be edited so that the main aspects of the song are included in a short amount of time.


Noise Gates: A noise gate, or gate, is an electronic device or software that is used to control the volume of an audio signal. Comparable to a compressor, which attenuates signals above a threshold, a noise gate attenuates signals that fall below the threshold.
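
A very stripped-down version of that behaviour can be written in a few lines. This sketch mutes anything under an assumed threshold; a real gate would also have attack, hold and release controls so it opens and closes smoothly.

    def gate(samples, threshold=0.05):
        """Pass samples at or above the threshold; silence anything quieter."""
        return [s if abs(s) >= threshold else 0.0 for s in samples]

    signal = [0.4, 0.02, -0.3, 0.01, -0.01, 0.5]   # the small values stand in for background noise
    print(gate(signal))                             # [0.4, 0.0, -0.3, 0.0, 0.0, 0.5]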

Linear editing: Linear video editing is a video editing post-production process of selecting, arranging and modifying images and sound in a predetermined, ordered sequence.

Non Linear: A non-linear editing system is a video or audio editing digital audio workstation system that performs non-destructive editing on source material. The name is in contrast to 20th-century methods of linear video editing and film editing. It is a form of audio, video or image editing where the original content is not modified in the course of editing; instead, the edits themselves are specified and modified by specialised software. A pointer-based playlist, effectively an edit decision list, is used to keep track of the edits. Each time the edited audio, video, or image is rendered, played back, or accessed, it is reconstructed from the original source and the specified editing steps.

Thursday, 18 May 2017

Recording Portfolio

Polar Patterns:
A polar pattern is a circular graph that shows how sensitive a microphone is in different directions. Each circular division represents 5 dB of sensitivity, so you can see where the microphone picks up the strongest and weakest sounds at different points.
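
The common first-order patterns below can all be described by one formula, sensitivity(angle) = a + (1 - a) * cos(angle), where the constant a sets the pattern. The sketch that follows uses approximate textbook values for a to compare how much each pattern picks up from directly behind the microphone.

    import math

    # approximate pattern constants: 1.0 = omni, 0.5 = cardioid, ~0.37 = supercardioid, 0.0 = figure-of-eight
    PATTERNS = {"omnidirectional": 1.0, "cardioid": 0.5,
                "supercardioid": 0.37, "bidirectional": 0.0}

    def sensitivity(pattern, angle_deg):
        a = PATTERNS[pattern]
        return a + (1 - a) * math.cos(math.radians(angle_deg))

    for name in PATTERNS:
        print(f"{name:16s} rear (180 degrees) pickup: {sensitivity(name, 180):+.2f}")
    # omni stays at +1.00 all round, cardioid drops to 0.00 at the rear,
    # and bidirectional picks up the rear at full strength but opposite polarity (-1.00).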

Cardioid: 
Very popular for vocals at live events as it doesn't pick up much background noise. Its unidirectional pickup makes for effective isolation of unwanted ambient sound and high resistance to feedback.


Supercardioid:           
Can be used for vocals at big live events, as it doesn't feed back very easily.


Omnidirectional:
Omnidirectional microphones are equally sensitive to sound arriving from all angles. Therefore, the microphone does not need to be aimed in any particular direction. This can be particularly useful when trying to capture a speaker's voice, as the individual can move their head without affecting the sound.




Bidirectional:
It picks up sound from in front of the microphone and from the rear, but not the sides, so it can be used for recording duets or interviews.






Shotgun: 

Very directional, can be used to pick up drums and cymbals.


Microphones:

Dynamic: A very thin diaphragm of Mylar or other material is attached to a coil of hair-thin copper wire. The coil is suspended in a magnetic field and, when sound vibrates the diaphragm, the coil moves up and down, creating a very small electrical current. Dynamic mics are usually used in high-sound-level applications, such as vocals or drums, as, unlike condenser mics, they are less likely to overload when exposed to loud sounds.













Condenser Microphones
Another microphone type is the condenser mic; these work differently to dynamic microphones in how they process the acoustic energy. In a condenser microphone, sound waves also strike a diaphragm causing it to vibrate, but in this mic the diaphragm sits in front of an electrically charged plate. The term condenser is actually obsolete but has stuck as the name for this type of microphone, which uses a capacitor to convert acoustical energy into electrical energy. Condenser microphones require power from a battery or external source. The resulting audio signal is stronger than that from a dynamic.



Ribbon Microphones
A ribbon microphone, also known as a ribbon velocity microphone, is a type of microphone that uses a thin ribbon of electrically conductive material, such as aluminium, duraluminium or nanofilm, placed between the poles of a magnet to produce a voltage by electromagnetic induction. Ribbon microphones are typically bidirectional, meaning that they pick up sounds equally well from either side of the microphone.




Carbon microphone

The carbon microphone, also known as a carbon button microphone, button microphone, or carbon transmitter, is a type of microphone, a transducer that converts sound to an electrical audio signal. It consists of two metal plates separated by granules of carbon. One plate is very thin and faces toward the speaking person, acting as a diaphragm. Sound waves striking the diaphragm cause it to vibrate, exerting a varying pressure on the granules, which in turn changes the electrical resistance between the plates.


Handheld/wireless Microphone
A wireless microphone is a microphone without a physical cable connecting it directly to the sound recording or amplifying equipment with which it is associated. Also known as a radio microphone, it has a small, battery-powered radio transmitter in the microphone body, which transmits the audio signal from the microphone by radio waves to a nearby receiver unit, which recovers the audio.

Tie clip Microphone
A small microphone used for television, theatre, and public speaking applications in order to allow for hands-free operation. They are most commonly provided with small clips for attaching to collars, ties, or other clothing. The cord may be hidden by clothes and either run to a radio frequency transmitter kept in a pocket or clipped to a belt, or routed directly to the mixer or a recording device.

Boom Microphone
A boom operator is an assistant to the production sound mixer. The principal responsibility of the boom operator is microphone placement, usually using a boom pole with a microphone attached to the end (called a boom mic), the aim being to hold the microphone as close to the actors or action as possible without allowing the microphone or boom pole to enter the camera's frame.


Pre-recorded sources: 

DVD- DVD is a digital optical disc storage format invented and developed by Philips, Sony, Toshiba, and Panasonic in 1995. The medium can store any kind of digital data and is widely used for software and other computer files as well as video programs watched using DVD players. DVDs offer higher storage capacity than compact discs while having the same dimensions.

CD- Compact disc (CD) is a digital optical disc data storage format released in 1982 and co-developed by Philips and Sony. The format was originally developed to store and play only sound recordings but was later adapted for storage of data (CD-ROM).

Digital Tape- 
Digital Audio Tape (DAT or R-DAT) is a signal recording and playback medium developed by Sony and introduced in 1987. As the name suggests, the recording is digital rather than analogue. DAT can record at higher, equal or lower sampling rates than a CD, at 16-bit quantisation.


Hard Disk-
A hard disk drive is a data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid, rapidly rotating disks coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces.

MiniDisc-
MiniDisc (MD) is a magneto-optical disc-based data storage format offering a capacity of 74 minutes and, later, 80 minutes, of digitised audio or 1 gigabyte of Hi-MD data. 

Sound File Formats-
MP3-
MP3 is an audio coding format for digital audio which uses a form of lossy data compression: data encoding methods that use inexact approximations and partial data discarding to reduce file sizes significantly, typically by a factor of 10 in comparison with a CD, yet still sound like the original uncompressed audio to most listeners. Compared to CD-quality digital audio, MP3 compression commonly achieves a 75 to 95% reduction in size.
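
That 75 to 95% figure can be sanity-checked with simple arithmetic: CD audio runs at 44,100 samples per second x 16 bits x 2 channels (about 1,411 kbit/s), so the reduction depends only on the MP3 bitrate chosen.

    CD_BITRATE = 44100 * 16 * 2               # about 1,411,200 bits per second

    for mp3_kbps in (128, 192, 320):
        reduction = 1 - (mp3_kbps * 1000) / CD_BITRATE
        print(f"{mp3_kbps} kbit/s MP3 is roughly {reduction:.0%} smaller than CD audio")
    # about 91%, 86% and 77% - in line with the 75 to 95% range quoted above.
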
WAV-
Waveform Audio File Format, more commonly known as WAV, is a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs. It is the main format used on Windows systems for raw and typically uncompressed audio.

AAC - The Advanced Audio Coding format is based on the MPEG-4 audio standard owned by Dolby. A copy-protected version of this format has been developed by Apple for use in music downloaded from its iTunes Music Store.

FLAC-
Developed by the Xiph.Org Foundation, the Free Lossless Audio Codec (FLAC) has much appeal due to its royalty-free licensing and open format. FLAC is both a compressed and lossless audio format, with file quality able to reach up to 32-bit / 96 kHz. FLAC has the advantage of a reduced file size (about 30 to 40 percent smaller than the original data) without having to sacrifice audio quality, which makes it an ideal medium for digital archiving.

File Conversion-
File conversion is the process of converting a file from one type to another, for example converting a WAV file into an MP3.
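
In practice this is usually done with a conversion tool. The sketch below assumes the free ffmpeg command-line tool is installed and simply calls it from Python; the file names and bitrate are placeholders.

    import subprocess

    def wav_to_mp3(wav_path, mp3_path, bitrate="192k"):
        """Convert a WAV file to MP3 by handing the job to ffmpeg."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", wav_path, "-b:a", bitrate, mp3_path],
            check=True,   # raise an error if ffmpeg reports a failure
        )

    # wav_to_mp3("final_mix.wav", "final_mix.mp3")   # example call with placeholder names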

As-live Recordings- 
This is the process of making a multitrack recording, but instead of recording the band members individually, the recording is of the band playing live in a space where each instrument is isolated. Some bands choose to do this because they feel they lose the feel of the band if they are recorded individually.

Live Recordings-
A process of recording live at a venue. To record the instruments, the engineer uses a small mixing desk connected to the PA outputs on the main desk. The tracks are stored on a hard drive, where the engineer can then mix them further at a later date.

Interview Material:
To record an interview, a range of mics can be used. For example, if clip-on microphones are used, each mic transmits back to a radio receiver, which can then be connected to a small desk to be mixed and recorded. Alternatively, a small stereo handheld recorder can be used; this records straight onto the device, though it may not offer the same high quality.

Library Material- 
A sound library is a platform where artists can upload their work for other people to use. Libraries can include sound effects, which can be used for film, and backing tracks, which can be used for advertising.

Recording Equipment:

Interfaces
Audio interfaces and video interfaces define physical parameters and the interpretation of signals. For digital audio and digital video, this can be thought of as defining the physical layer, data link layer, and most or all of the application layer.

Gain Stages
A gain stage is a point in an audio signal flow where the engineer can adjust the level, such as a fader on a mixing console or in a DAW. Gain staging is the process of managing the relative levels in a series of gain stages to prevent the introduction of noise and distortion.
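
Because levels expressed in dB simply add from stage to stage, gain staging can be checked with nothing more than a running total. The chain and figures below are made-up examples, not measurements.

    def through_chain(source_dbfs, stages):
        """Print the running level after each gain stage and return the final level."""
        level = source_dbfs
        for name, gain_db in stages:
            level += gain_db
            print(f"after {name:11s}: {level:+6.1f} dBFS")
        return level

    # hypothetical chain: quiet mic signal, preamp, channel EQ, channel fader
    through_chain(-50.0, [("preamp", +35.0), ("EQ boost", +2.0), ("fader", -4.0)])
    # ends around -17 dBFS - a healthy level, nowhere near clipping at 0 dBFS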

Mixer Inputs-
The channel input strips are usually a bank of identical monaural or stereo input channels. Each channel has rotary knobs, buttons and/or faders for controlling the gain and equalisation (e.g., bass and treble) of the signal on each channel. 

Audio signal flow-
Audio signal flow is the path an audio signal takes from source to output. The concept of audio signal flow is closely related to the concept of audio gain staging; each component in the signal flow can be thought of as a gain stage.

Sound signal integrity-
Signal integrity or SI is a set of measures of the quality of an electrical signal. In digital electronics, a stream of binary values is represented by a voltage waveform. However, digital signals are fundamentally analog in nature, and all signals are subject to effects such as noise, distortion, and loss. Over short distances and at low bit rates, a simple conductor can transmit this with sufficient fidelity.

Direct Injection-
A DI unit is an electronic device typically used in recording studios and in sound reinforcement systems to connect a high-impedance, line level, unbalanced output signal to a low-impedance, microphone level, balanced input, usually via an XLR connector and cable. DIs are frequently used to connect an electric guitar or electric bass to a mixing console's microphone input jack.

Multi-track-
Multitrack recording (MTR)—also known as multi-tracking, double tracking, or tracking—is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multi-tracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed.

Stereo recording-
Stereo recording is a technique involving the use of two microphones to simultaneously record one instrument. The mono signals from each microphone are assigned to the left and right channels of a stereo track to create a sense of width in the recording.
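
The sketch below shows the left/right assignment in code, using only Python's built-in wave module: two mono signals (here just test tones standing in for the two microphones) are interleaved into a 16-bit stereo file.

    import math, struct, wave

    SAMPLE_RATE = 44100
    N = SAMPLE_RATE  # one second of audio

    # stand-ins for the two microphone signals: two quiet test tones
    left = [0.5 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE) for n in range(N)]
    right = [0.5 * math.sin(2 * math.pi * 554 * n / SAMPLE_RATE) for n in range(N)]

    with wave.open("stereo_test.wav", "wb") as wav:
        wav.setnchannels(2)          # stereo: left and right
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = bytearray()
        for l, r in zip(left, right):                      # interleave one frame at a time
            frames += struct.pack("<hh", int(l * 32767), int(r * 32767))
        wav.writeframes(bytes(frames))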

Analog recording-
Analog recording is a technique used for the recording of analog signals which, among many possibilities, allows analog audio and analog video for later playback. 

Digital Recording-
In a digital recording system, sound is stored and manipulated as a stream of discrete numbers, each number representing the air pressure at a particular time. The numbers are generated by a microphone connected to a circuit called an analogue-to-digital converter (ADC).
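
The idea can be shown in a few lines: sample a stand-in for the continuous microphone voltage at a fixed rate and round each sample to a 16-bit integer, much as an ADC does. The tone, sample rate and bit depth here are just example values.

    import math

    SAMPLE_RATE = 48000   # samples taken per second
    BIT_DEPTH = 16        # bits used to store each sample

    def analogue(t):
        """Stand-in for the continuous signal from a microphone: a 1 kHz tone."""
        return 0.8 * math.sin(2 * math.pi * 1000 * t)

    def record(duration_s):
        max_int = 2 ** (BIT_DEPTH - 1) - 1    # 32767 for 16-bit audio
        return [int(analogue(n / SAMPLE_RATE) * max_int)
                for n in range(int(duration_s * SAMPLE_RATE))]

    samples = record(0.001)                   # one millisecond becomes 48 discrete numbers
    print(len(samples), samples[:5])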

Non-linear
A non-linear editing system is a video or audio editing (NLAE) digital audio workstation (DAW) system that performs non-destructive editing on source material. 

CD
Compact disc is a digital optical disc data storage format released in 1982 and co-developed by Philips and Sony. The format was originally developed to store and play only sound recordings but was later adapted for storage of data.


DVD
DVD is a digital optical disc storage format invented and developed by Panasonic, Philips, Sony and Toshiba in 1995. The medium can store any kind of digital data and is widely used for software and other computer files as well as video programs watched using DVD players. DVDs offer higher storage capacity than compact discs while having the same dimensions.

Audio Capture:

Studio: To capture audio in a television studio, clip-on tie mics can be used to mic up separate people; alternatively, a boom mic can be placed just out of shot. This can be advantageous as only one microphone is needed instead of one per individual.

Outside Broadcast-
A range of different methods can be used to capture an outside broadcast, for example clip-on microphones, a boom mic with a dead cat (furry windshield) out of shot, or alternatively handheld microphones.

Interviews-
To record an interview, a mic with a figure-of-eight polar pattern could be used; alternatively, clip-on mics or two separate microphones could be used. Depending on the number of people being interviewed, it could also be beneficial to have one handheld mic which the interviewer can pass around.

Atmosphere- 
Atmosphere can be captured using a good-quality microphone configured with an omnidirectional polar pattern, which means the recording gives a good sense of its surroundings.

Monologue-
Recording a monologue can be done with a high-quality condenser microphone in a soundproofed space such as a vocal booth in a studio; this ensures a high-quality recording without any unwanted sounds.

Group Debate-
Group debates can be recorded by miking up the individual people with clip-on microphones; alternatively, an omnidirectional microphone can be placed in the middle of the debate, because it will capture everyone around the microphone.

Audience Interaction-
To record members of the audience, boom microphones can be used to pick up individual members without having to attach microphones to them. Alternatively, to record the crowd as a whole, multiple omnidirectional microphones can be used to pick up the atmosphere and reaction of the entire crowd.





Friday, 7 October 2016

Sound and Picture




Film

A film is normally created with a large budget and has a longer run time than many other moving-image forms; it typically includes multiple camera angles rather than just one. It also contains a lot of Foley sound and non-diegetic sound. Films are created by shooting a series of takes, then selecting the best and editing them together in post-production. Films are created for entertainment and occasionally as documentaries. More often than not, films are produced because of the huge profits they can generate.

Television
Television is also normally created with a big budget. The length of a television programme rarely exceeds one hour. It typically includes multiple camera angles rather than just one, and also contains a lot of Foley sound and non-diegetic sound. Similar to films, TV programmes are created as entertainment and documentaries.


Web

Web videos are normally relatively short, often featuring only one camera angle and therefore one take. There is very little non-diegetic sound or Foley as, typically, a web video has little to no budget. Web videos are normally informative or made for entertainment, and are rarely made for profit.


Hand-held                                                                                                                                 
Hand-held moving images are created using a hand-held device such as a mobile phone or small camera. The purpose of this type of moving image is to entertain, as it is normally uploaded to the internet. Similar to web videos, hand-held videos are normally informative or made for entertainment, and are rarely made for profit.

Animation 
There are many types of animation such as hand drawn, stop motion and CGI. The film will be edited with music and voice overs using computer software.



 Audio Components
  • Studio and location - A studio is a location designed for optimal sound recording, as it lacks background noise due to soundproofing. This works perfectly for productions like news, as it allows the sound to be heard more clearly, but won't be useful for productions where background ambience is required. On location refers to an outside area that would not typically be used for a sound recording. This would be useful for capturing ambience, but may mean that post-production is required in order to hear the dialogue more clearly.
  • Interviews - these will usually be mic'ed up with small microphones attached to both the interviewer and the interviewee. The recorded audio will then be added to any video footage later in production.
  • Presentation -  A presenter will either have a wireless handheld mic used for live events or if the presentation is for a TV broadcast it would most likely be a small clip on mic that transmits to a sound engineer.     
  • Voiceover - Voiceovers will be recorded separately in a studio after filming has finished. The actor recording the voiceover will have the film visible so they can time their speech precisely.
  • Drama dialogue - For a drama production, multiple voices will need to be captured. A drama production will therefore use large handheld mics on poles, known as boom mics. These are held above the shot and will capture the sound of multiple people.
  •  Sound effects (SFX) - These are small background details helping to enhance the mood and feel of a film. These are recorded in a process known as Foley.
  • Stationary and moving sound sources - A stationary sound source refers to something being recorded that is not moving, whilst a moving sound source is. As a moving sound source moves closer and then further away from you, its pitch and volume will increase and then decrease.


Combining audio and visuals
  • Diegetic - Sound where the source is visible on screen, from characters speaking or from props. 
  • Non-diegetic - Sound from outside the environment of the film,for example, the music soundtrack.
  • External diegetic - Sound added to the shot at a later date but which can be heard by the characters, for example background noise; this is produced by Foley.
  • Mood - Soundtracks will often be used to portray a certain mood that fits with the footage being shown.
  • Meaning - This involves adding deep emotional quality to a scene by using the music chosen to express this emotion.
  • Illusion - This involves adding background noises from objects that are not necessarily on screen to create an atmosphere and give the viewer a sense that they are at a specific location.