AI Music Composition


    • Ready-made tools: AIVA, 伯牙智能作曲
    • Hit-makers: melodious as music, precise as mathematics
    • Generating music

 


Background music in public spaces, or behind voice programs, does not necessarily need to carry profound ideas or stirring emotion.

It is often functional: a soothing tempo can keep people from getting agitated in an overcrowded space, or an accompaniment can mask flaws and echo in a voice recording.

So much of the time, it is enough simply to have music; it does not need to stand out.

But where copyright is enforced, even a run-of-the-mill background track can cost 10 dollars apiece. With AI, that cost drops to nearly zero.

 


Ready-made tools: AIVA, 伯牙智能作曲

伯牙智能作曲 (Boya AI Composer) focuses on Chinese folk music: it collects folk music from different regions and trains an RNN to generate new pieces in those styles. You can find it by searching for the WeChat mini-program 伯牙智能作曲.

[Screenshot 1]

There are also well-known AI composition platforms such as AIVA.

[Screenshot 2]
After registering with an email address, you are offered several preset modules.

You can choose symphonic, cyberpunk, jazz, piano, rock, pop, alternative, and so on.

If you open the advanced options, a much more detailed menu appears: the key (C or G, say), the time signature (4/4 or 3/4), the tempo (80 or 120 BPM), which of some 20 instruments to include, how many seconds the generated piece should last, whether to auto-generate a percussion part, how many pieces to generate at once with these settings, and a pile of other options.
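Taken together, these settings amount to a small configuration object. As a sketch only: the field names below are illustrative assumptions, not AIVA's actual API.

```python
# Illustrative only: these field names are assumptions, not AIVA's real API.
generation_request = {
    "style": "jazz",                      # symphonic, cyberpunk, jazz, piano, rock, pop, ...
    "key": "C",                           # e.g. C or G
    "time_signature": "4/4",              # or 3/4
    "tempo_bpm": 120,                     # e.g. 80 or 120
    "instruments": ["piano", "strings"],  # pick from ~20 instruments
    "duration_seconds": 90,               # length of each generated piece
    "auto_percussion": True,              # auto-generate a percussion part?
    "num_pieces": 5,                      # how many pieces to generate in one batch
}
```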

Once everything is selected, click Generate, wait a few minutes, and the pieces appear on your page.
 


Hit-makers: melodious as music, precise as mathematics

Mathematics does not directly produce beauty, yet beauty often hides mathematics.

Pop songs look free and loose, but they follow strict rhythmic requirements: a sense of rhythm as precise as mathematics.

That rhythm comes from repetition, and the repetition happens at every scale, from the smallest units to the largest.

Take 《在水一方》 ("On the Water's Side") as an example; the lyrics below are quoted in the original Chinese:

绿草苍苍 白雾茫茫

有位佳人 在水一方

我愿逆流而上 依偎在她身旁

无奈前有险滩 道路又远又长

我愿顺流而下 找寻她的方向

却见依稀仿佛 她在水的中央

绿草萋萋 白雾迷离

有位佳人 靠水而居

我愿逆流而上 与她轻言细语

无奈前有险滩 道路曲折无已

我愿顺流而下 找寻她的踪迹

却见仿佛依稀 她在水中伫立

绿草苍苍 白雾茫茫

有位佳人 在水一方

The first verse has four lines: "绿草苍苍,白雾茫茫,有位佳人,在水一方" (lush green grass, vast white mist; a fair one dwells across the water).

The melodies of the first two lines are already similar, and the lyrics form neat parallel couplets.

Then comes the second verse, again four lines; the words change but the melody is identical to the first verse: "绿草萋萋,白雾迷离,有位佳人,靠水而居."

Then comes the chorus: eight lines in total, with the last four nearly repeating the first four. After the chorus, the verse repeats, followed by the chorus once more.

Phrases repeat, verses repeat, choruses repeat, the verse-chorus arrangement repeats, and if you like the song you will play the whole thing on repeat: repetition at every level, from small to large.

How strict does this repetition need to be?

Ideally, the opening note of the pre-chorus should match the opening note of the verse.

Repetition like this produces the earworm effect: the melody keeps circling in your head.

Repetition creates anticipation. It is like asking a question: the answer prompts another question, which brings another answer and another question, a call-and-response exchange.

At the same time, we know too much repetition breeds boredom. So how much repetition is just right?

Scientists stimulated mice with a single tone, say B. The first time, a mouse is startled; after hearing it a few times it habituates, and the scientists have to switch to another tone, say C, to keep stimulating it.

After many experiments, the scientists distilled a formula for stimulating the mice with the fewest possible tones.

If you have only the three tones B, C, and D and want to keep stimulating a mouse for as long as possible, the formula is:

■ BBBBC - BBBC - BBC - BC - D

First repeat as much as possible; when repetition wears off, introduce something new; repeat again; and finally introduce something newer still.

Inserting the new tone C lets the listener temporarily forget the B that has already been repeated several times, so after playing C you can play B again.
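The schedule above can be generated mechanically. A minimal sketch:

```python
def habituation_schedule(base, novel, final, start_repeats=4):
    """Repeat `base` as long as possible, break habituation with `novel`,
    shrinking the run each cycle, then end on the still-newer `final`
    tone: BBBBC - BBBC - BBC - BC - D."""
    segments = [base * r + novel for r in range(start_repeats, 0, -1)]
    segments.append(final)
    return " - ".join(segments)

print(habituation_schedule("B", "C", "D"))  # BBBBC - BBBC - BBC - BC - D
```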

This matches a contradiction in human psychology: we like familiar things, yet we are unwilling to admit that we like familiar things; we always feel we are chasing novelty.

So repetition brings pleasure, but once the listener recognizes it as repetition, it causes annoyance.

The best strategy is to repeat as much as possible without being noticed, and to switch to something new just before being found out.

In fact, the back half of the mice's favorite formula, BBC - BC - D, matches the most common form of pop songs:

■ verse - verse - chorus - verse - chorus - and finally a varied ending

Think about the pop songs you have heard recently: most follow this structure, singing the verse twice with new words over the same melody, then a chorus, then a verse again…

Repetition in just the right dose produces the earworm effect. Music really is a sugar-coated bullet aimed at memory.

……

Repetition in music goes far beyond this.

⒈ Melody determines chords; chords generate melody.

When a human composes, they typically think of two or three melodic phrases first, work out the chord pattern those phrases belong to, and then compose with that chord progression. Root motion by a fourth has the strongest pull; motion by a second or a sixth comes next.

For example, the verse chords of 《怒放的生命》:

■Bm D Em Bm
■Bm D Em A

Roughly every four chords form one cycle. In other words, the similarity of the melody is determined by the similarity of the chords.
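The periodicity claim can be checked mechanically: compare consecutive four-chord phrases position by position. A minimal sketch, using the progression quoted above:

```python
def phrase_overlap(chords, period=4):
    """Fraction of positions where the first two `period`-length phrases agree."""
    first, second = chords[:period], chords[period:2 * period]
    return sum(a == b for a, b in zip(first, second)) / period

verse = ["Bm", "D", "Em", "Bm", "Bm", "D", "Em", "A"]
print(phrase_overlap(verse))  # 0.75: the two phrases differ only in the last chord
```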

⒉ Seek variation within repetition; keep repetition within variation.
In pop songs this takes two forms: variation in the arrangement and variation in the main melody.

Variation in the arrangement can be a variation of the secondary melody, a change of instrumentation, or both.

For example, a song's secondary melody (intro, interludes, and so on) might be played in turn by flute with strings, then by a strongly rhythmic hand drum with pipa, then by a hard-hitting drum kit with an electric-guitar solo, while remaining the same secondary melody throughout.

Variation of the main melody is what many singers do in the final climax: at the most soaring point of the chorus they suddenly alter one melodic phrase, but its first and last notes still generally land on the root of the underlying chord, so it sounds novel without losing harmony. It varies, yet falls back onto the original chords, which then keep repeating.
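The "novel yet harmonious" rule (vary the phrase, but land its first and last notes on tones of the underlying chord) can be expressed as a tiny check. A sketch with made-up note names:

```python
def novel_yet_harmonious(phrase, chord_tones):
    """A varied phrase stays consonant if its first and last notes
    are still tones of the chord playing underneath it."""
    return phrase[0] in chord_tones and phrase[-1] in chord_tones

em = {"E", "G", "B"}                                    # tones of an E-minor chord
print(novel_yet_harmonious(["B", "C#", "D", "E"], em))  # True
print(novel_yet_harmonious(["A", "C#", "F#"], em))      # False
```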

⒊ Medleys

Precisely because so many songs share the same chord-progression formula (such as II-IV-II-VI…), they can be strung together into medleys. The square-dance mashup 《最炫小苹果》 merges its two hit source songs because they follow the same chord pattern underneath, so they can be sung spliced together.

The audience finds this novel and fun, but underneath it is still chords repeating.

All of the above is elementary music knowledge, yet all of it hinges on repetition; it is easy to imagine how much more deeply advanced music theory rests on the same foundation.

Melodious as music, precise as mathematics.

Repeat, repeat, and repeat again, right up until the listener is about to notice.
 


Generating music

Generating music is much like generating names or poems: in each case an RNN learns the underlying regularities, whether of names, of poems, or of music.

After training, the model has internalized those regularities, and we then sample from it.

The sampled output therefore exhibits the same regularities: of names, of poems, or of music.

Sound, however, is represented in a computer in a more complicated way, so we need the help of several utility libraries.
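The sampling step described above can be sketched in plain NumPy. This is a generic illustration of temperature sampling from a predicted distribution, not code from the project below:

```python
import numpy as np

def sample_next(probs, temperature=1.0, rng=None):
    """Draw the next token index from a model's probability vector.

    Lower temperature sharpens the distribution (more repetition of
    likely values); higher temperature flattens it (more surprises).
    """
    rng = rng or np.random.default_rng(0)
    logits = np.log(np.asarray(probs, dtype=float)) / temperature
    p = np.exp(logits - logits.max())   # subtract max for numerical stability
    p /= p.sum()
    return int(rng.choice(len(p), p=p))
```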

Project code:

from __future__ import print_function
import IPython
import sys
from music21 import *
import numpy as np
from grammar import *
from qa import *
from preprocess import * 
from music_utils import *
from data_utils import *
from keras.models import load_model, Model
from keras.layers import Dense, Activation, Dropout, Input, LSTM, Reshape, Lambda, RepeatVector
from keras.initializers import glorot_uniform

from keras.utils import to_categorical

from keras.optimizers import Adam
from keras import backend as K

IPython.display.Audio('./data/30s_seq.mp3')

X, Y, n_values, indices_values = load_music_utils()
print('shape of X:', X.shape)
print('number of training examples:', X.shape[0])
print('Tx (length of sequence):', X.shape[1])
print('total # of unique values:', n_values)
print('Shape of Y:', Y.shape)

n_a = 64                                          # LSTM hidden-state size

reshapor = Reshape((1, 78))                       # reshape one value into (1, 78)
LSTM_cell = LSTM(n_a, return_state = True)        # shared across all time steps
densor = Dense(n_values, activation='softmax')    # shared output layer

# Build the training model: at each of the Tx time steps, slice out the
# current input, pass it through the shared LSTM cell, and predict the
# next value with the shared softmax layer.
def djmodel(Tx, n_a, n_values):
    X = Input(shape=(Tx, n_values))
    a0 = Input(shape=(n_a,), name='a0')
    c0 = Input(shape=(n_a,), name='c0')
    a = a0
    c = c0

    outputs = []

    for t in range(Tx):
        # Select the t-th time step of X (shape: (batch, n_values))
        x = Lambda(lambda v: X[:, t, :])(X)
        x = reshapor(x)
        a, _, c = LSTM_cell(x, initial_state=[a, c])
        out = densor(a)
        outputs.append(out)

    model = Model(inputs=[X, a0, c0], outputs=outputs)

    return model

# (Before sampling, the model returned by djmodel must be compiled and fit
#  on X, Y with zero initial states. inference_model and predict_and_sample
#  are defined in inference_code.py and data_utils.py below.)
results, indices = predict_and_sample(inference_model, x_initializer, a_initializer, c_initializer)
print("np.argmax(results[12]) =", np.argmax(results[12]))
print("np.argmax(results[17]) =", np.argmax(results[17]))
print("list(indices[12:18]) =", list(indices[12:18]))

out_stream = generate_music(inference_model)

The output is a MIDI file named my_music.midi, in the output folder alongside this document. You can play it directly, although some players do not support MIDI; PotPlayer does, or you can convert the MIDI file to mp3 with a tool such as Format Factory (格式工厂).

Supporting utility modules:

# qa.py
from itertools import zip_longest
import random
from music21 import *

def __roundDown(num, mult):
    return (float(num) - (float(num) % mult))

def __roundUp(num, mult):
    return __roundDown(num, mult) + mult

def __roundUpDown(num, mult, upDown):
    if upDown < 0:
        return __roundDown(num, mult)
    else:
        return __roundUp(num, mult)

def __grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

def prune_grammar(curr_grammar):
    pruned_grammar = curr_grammar.split(' ')

    for ix, gram in enumerate(pruned_grammar):
        terms = gram.split(',')
        terms[1] = str(__roundUpDown(float(terms[1]), 0.250, 
            random.choice([-1, 1])))
        pruned_grammar[ix] = ','.join(terms)
    pruned_grammar = ' '.join(pruned_grammar)

    return pruned_grammar

def prune_notes(curr_notes):
    for n1, n2 in __grouper(curr_notes, n=2):
        if n2 is None: 
            continue
        if isinstance(n1, note.Note) and isinstance(n2, note.Note):
            if n1.nameWithOctave == n2.nameWithOctave:
                curr_notes.remove(n2)

    return curr_notes

def clean_up_notes(curr_notes):
    removeIxs = []
    for ix, m in enumerate(curr_notes):
        if (m.quarterLength == 0.0):
            m.quarterLength = 0.250
        
        if (ix < (len(curr_notes) - 1)):
            if (m.offset == curr_notes[ix + 1].offset and
                isinstance(curr_notes[ix + 1], note.Note)):
                removeIxs.append((ix + 1))
    curr_notes = [i for ix, i in enumerate(curr_notes) if ix not in removeIxs]

    return curr_notes
# preprocess.py
from __future__ import print_function

from music21 import *
from collections import defaultdict, OrderedDict
from itertools import groupby, zip_longest

from grammar import *

from grammar import parse_melody
from music_utils import *

def __parse_midi(data_fn):
    midi_data = converter.parse(data_fn)
    melody_stream = midi_data[5]     
    melody1, melody2 = melody_stream.getElementsByClass(stream.Voice)
    for j in melody2:
        melody1.insert(j.offset, j)
    melody_voice = melody1

    for i in melody_voice:
        if i.quarterLength == 0.0:
            i.quarterLength = 0.25

    melody_voice.insert(0, instrument.ElectricGuitar())
    melody_voice.insert(0, key.KeySignature(sharps=1))

    partIndices = [0, 1, 6, 7]
    comp_stream = stream.Voice()
    comp_stream.append([j.flat for i, j in enumerate(midi_data) 
        if i in partIndices])

    full_stream = stream.Voice()
    for i in range(len(comp_stream)):
        full_stream.append(comp_stream[i])
    full_stream.append(melody_voice)

    solo_stream = stream.Voice()
    for part in full_stream:
        curr_part = stream.Part()
        curr_part.append(part.getElementsByClass(instrument.Instrument))
        curr_part.append(part.getElementsByClass(tempo.MetronomeMark))
        curr_part.append(part.getElementsByClass(key.KeySignature))
        curr_part.append(part.getElementsByClass(meter.TimeSignature))
        curr_part.append(part.getElementsByOffset(476, 548, 
                                                  includeEndBoundary=True))
        cp = curr_part.flat
        solo_stream.insert(cp)

    melody_stream = solo_stream[-1]
    measures = OrderedDict()
    offsetTuples = [(int(n.offset / 4), n) for n in melody_stream]
    measureNum = 0 
    for key_x, group in groupby(offsetTuples, lambda x: x[0]):
        measures[measureNum] = [n[1] for n in group]
        measureNum += 1

    chordStream = solo_stream[0]
    chordStream.removeByClass(note.Rest)
    chordStream.removeByClass(note.Note)
    offsetTuples_chords = [(int(n.offset / 4), n) for n in chordStream]

    chords = OrderedDict()
    measureNum = 0
    for key_x, group in groupby(offsetTuples_chords, lambda x: x[0]):
        chords[measureNum] = [n[1] for n in group]
        measureNum += 1

    del chords[len(chords) - 1]
    assert len(chords) == len(measures)

    return measures, chords

def __get_abstract_grammars(measures, chords):
    # extract grammars
    abstract_grammars = []
    for ix in range(1, len(measures)):
        m = stream.Voice()
        for i in measures[ix]:
            m.insert(i.offset, i)
        c = stream.Voice()
        for j in chords[ix]:
            c.insert(j.offset, j)
        parsed = parse_melody(m, c)
        abstract_grammars.append(parsed)

    return abstract_grammars

def get_musical_data(data_fn):
    
    measures, chords = __parse_midi(data_fn)
    abstract_grammars = __get_abstract_grammars(measures, chords)

    return chords, abstract_grammars

def get_corpus_data(abstract_grammars):
    corpus = [x for sublist in abstract_grammars for x in sublist.split(' ')]
    values = set(corpus)
    val_indices = dict((v, i) for i, v in enumerate(values))
    indices_val = dict((i, v) for i, v in enumerate(values))

    return corpus, values, val_indices, indices_val
# music_utils.py
from __future__ import print_function
import tensorflow as tf
import keras.backend as K
from keras.layers import RepeatVector
import sys
from music21 import *
import numpy as np
from grammar import *
from preprocess import *
from qa import *


def data_processing(corpus, values_indices, m = 60, Tx = 30):
    Tx = Tx 
    N_values = len(set(corpus))
    np.random.seed(0)
    X = np.zeros((m, Tx, N_values), dtype=bool)  # np.bool is removed in modern NumPy
    Y = np.zeros((m, Tx, N_values), dtype=bool)
    for i in range(m):
        random_idx = np.random.choice(len(corpus) - Tx)
        corp_data = corpus[random_idx:(random_idx + Tx)]
        for j in range(Tx):
            idx = values_indices[corp_data[j]]
            if j != 0:
                X[i, j, idx] = 1
                Y[i, j-1, idx] = 1
    
    Y = np.swapaxes(Y,0,1)
    Y = Y.tolist()
    return np.asarray(X), np.asarray(Y), N_values 

def next_value_processing(model, next_value, x, predict_and_sample, indices_values, abstract_grammars, duration, max_tries = 1000, temperature = 0.5):
    if (duration < 0.00001):
        tries = 0
        while (next_value.split(',')[0] == 'R' or 
            len(next_value.split(',')) != 2):
            if tries >= max_tries:
                rand = np.random.randint(0, len(abstract_grammars))
                next_value = abstract_grammars[rand].split(' ')[0]
            else:
                next_value = predict_and_sample(model, x, indices_values, temperature)

            tries += 1
            
    return next_value


def sequence_to_matrix(sequence, values_indices):
    sequence_len = len(sequence)
    x = np.zeros((1, sequence_len, len(values_indices)))
    for t, value in enumerate(sequence):
        if (not value in values_indices): print(value)
        x[0, t, values_indices[value]] = 1.
    return x

def one_hot(x):
    x = K.argmax(x)
    x = tf.one_hot(x, 78) 
    x = RepeatVector(1)(x)
    return x
# inference_code.py
# (Runs in the notebook namespace, so Input, Model, etc. are already imported.)
def music_inference_model(LSTM_cell, densor, n_x = 78, n_a = 64, Ty = 100):
    x0 = Input(shape=(1, n_x))
    
    a0 = Input(shape=(n_a,), name='a0')
    c0 = Input(shape=(n_a,), name='c0')
    a = a0
    c = c0
    x = x0

    outputs = []
    # At each step, feed the previous prediction back in as the next input.
    for t in range(Ty):
        a, _, c = LSTM_cell(x, initial_state=[a, c])
        out = densor(a)
        outputs.append(out)
        x = RepeatVector(1)(out)
        
    inference_model = Model(inputs=[x0, a0, c0], outputs=outputs)
    return inference_model

inference_model = music_inference_model(LSTM_cell, densor)

x1 = np.zeros((1, 1, 78))
x1[:,:,35] = 1
a1 = np.zeros((1, n_a))
c1 = np.zeros((1, n_a))
predicting = inference_model.predict([x1, a1, c1])
indices = np.argmax(predicting, axis = -1)
results = to_categorical(indices, num_classes=78)
# grammar.py
from collections import OrderedDict, defaultdict
from itertools import groupby
from music21 import *
import copy, random, pdb

def __is_scale_tone(chord, note):
    scaleType = scale.DorianScale() 
    if chord.quality == 'major':
        scaleType = scale.MajorScale()
    scales = scaleType.derive(chord) 
    allPitches = list(set([pitch for pitch in scales.getPitches()]))
    allNoteNames = [i.name for i in allPitches] # octaves don't matter

    noteName = note.name
    return (noteName in allNoteNames)

def __is_approach_tone(chord, note):
    for chordPitch in chord.pitches:
        stepUp = chordPitch.transpose(1)
        stepDown = chordPitch.transpose(-1)
        if (note.name == stepDown.name or 
            note.name == stepDown.getEnharmonic().name or
            note.name == stepUp.name or
            note.name == stepUp.getEnharmonic().name):
                return True
    return False

def __is_chord_tone(lastChord, note):
    return (note.name in (p.name for p in lastChord.pitches))

def __generate_chord_tone(lastChord):
    lastChordNoteNames = [p.nameWithOctave for p in lastChord.pitches]
    return note.Note(random.choice(lastChordNoteNames))

def __generate_scale_tone(lastChord):
    scaleType = scale.WeightedHexatonicBlues() 
    if lastChord.quality == 'major':
        scaleType = scale.MajorScale()

    scales = scaleType.derive(lastChord) 
    allPitches = list(set([pitch for pitch in scales.getPitches()]))
    allNoteNames = [i.name for i in allPitches] 
    
    sNoteName = random.choice(allNoteNames)
    lastChordSort = lastChord.sortAscending()
    sNoteOctave = random.choice([i.octave for i in lastChordSort.pitches])
    sNote = note.Note(("%s%s" % (sNoteName, sNoteOctave)))
    return sNote

def __generate_approach_tone(lastChord):
    sNote = __generate_scale_tone(lastChord)
    aNote = sNote.transpose(random.choice([1, -1]))
    return aNote

def __generate_arbitrary_tone(lastChord):
    return __generate_scale_tone(lastChord) 

def parse_melody(fullMeasureNotes, fullMeasureChords):
    measure = copy.deepcopy(fullMeasureNotes)
    chords = copy.deepcopy(fullMeasureChords)
    measure.removeByNotOfClass([note.Note, note.Rest])
    chords.removeByNotOfClass([chord.Chord])

    measureStartTime = measure[0].offset - (measure[0].offset % 4)
    measureStartOffset  = measure[0].offset - measureStartTime

    fullGrammar = ""
    prevNote = None 
    numNonRests = 0 
    for ix, nr in enumerate(measure):
        try: 
            lastChord = [n for n in chords if n.offset <= nr.offset][-1]
        except IndexError:
            chords[0].offset = measureStartTime
            lastChord = [n for n in chords if n.offset <= nr.offset][-1]

        elementType = ' '
        
        if isinstance(nr, note.Rest):
            elementType = 'R'
        elif nr.name in lastChord.pitchNames or isinstance(nr, chord.Chord):
            elementType = 'C'
        elif __is_scale_tone(lastChord, nr):
            elementType = 'S'
        elif __is_approach_tone(lastChord, nr):
            elementType = 'A'
        else:
            elementType = 'X'

        if (ix == (len(measure)-1)):
            diff = measureStartTime + 4.0 - nr.offset
        else:
            diff = measure[ix + 1].offset - nr.offset

        noteInfo = "%s,%.3f" % (elementType, nr.quarterLength) 
        intervalInfo = ""
        if isinstance(nr, note.Note):
            numNonRests += 1
            if numNonRests == 1:
                prevNote = nr
            else:
                noteDist = interval.Interval(noteStart=prevNote, noteEnd=nr)
                noteDistUpper = interval.add([noteDist, "m3"])
                noteDistLower = interval.subtract([noteDist, "m3"])
                intervalInfo = ",<%s,%s>" % (noteDistUpper.directedName, 
                    noteDistLower.directedName)
                prevNote = nr
                
        grammarTerm = noteInfo + intervalInfo 
        fullGrammar += (grammarTerm + " ")

    return fullGrammar.rstrip()

def unparse_grammar(m1_grammar, m1_chords):
    m1_elements = stream.Voice()
    currOffset = 0.0 
    prevElement = None
    for ix, grammarElement in enumerate(m1_grammar.split(' ')):
        terms = grammarElement.split(',')
        currOffset += float(terms[1]) 

        if terms[0] == 'R':
            rNote = note.Rest(quarterLength = float(terms[1]))
            m1_elements.insert(currOffset, rNote)
            continue

        try: 
            lastChord = [n for n in m1_chords if n.offset <= currOffset][-1]
        except IndexError:
            m1_chords[0].offset = 0.0
            lastChord = [n for n in m1_chords if n.offset <= currOffset][-1]

        if (len(terms) == 2): 
            insertNote = note.Note() 

            if terms[0] == 'C':
                insertNote = __generate_chord_tone(lastChord)

            elif terms[0] == 'S':
                insertNote = __generate_scale_tone(lastChord)

            else:
                insertNote = __generate_approach_tone(lastChord)

            insertNote.quarterLength = float(terms[1])
            if insertNote.octave < 4:
                insertNote.octave = 4
            m1_elements.insert(currOffset, insertNote)
            prevElement = insertNote

        else:
            interval1 = interval.Interval(terms[2].replace("<",''))
            interval2 = interval.Interval(terms[3].replace(">",''))
            if interval1.cents > interval2.cents:
                upperInterval, lowerInterval = interval1, interval2
            else:
                upperInterval, lowerInterval = interval2, interval1
            lowPitch = interval.transposePitch(prevElement.pitch, lowerInterval)
            highPitch = interval.transposePitch(prevElement.pitch, upperInterval)
            numNotes = int(highPitch.ps - lowPitch.ps + 1) # for range(s, e)

            if terms[0] == 'C':
                relevantChordTones = []
                for i in range(0, numNotes):
                    currNote = note.Note(lowPitch.transpose(i).simplifyEnharmonic())
                    if __is_chord_tone(lastChord, currNote):
                        relevantChordTones.append(currNote)
                if len(relevantChordTones) > 1:
                    insertNote = random.choice([i for i in relevantChordTones
                        if i.nameWithOctave != prevElement.nameWithOctave])
                elif len(relevantChordTones) == 1:
                    insertNote = relevantChordTones[0]
                else: 
                    insertNote = prevElement.transpose(random.choice([-2,2]))
                if insertNote.octave < 3:
                    insertNote.octave = 3
                insertNote.quarterLength = float(terms[1])
                m1_elements.insert(currOffset, insertNote)

            elif terms[0] == 'S':
                relevantScaleTones = []
                for i in range(0, numNotes):
                    currNote = note.Note(lowPitch.transpose(i).simplifyEnharmonic())
                    if __is_scale_tone(lastChord, currNote):
                        relevantScaleTones.append(currNote)
                if len(relevantScaleTones) > 1:
                    insertNote = random.choice([i for i in relevantScaleTones
                        if i.nameWithOctave != prevElement.nameWithOctave])
                elif len(relevantScaleTones) == 1:
                    insertNote = relevantScaleTones[0]
                else: 
                    insertNote = prevElement.transpose(random.choice([-2,2]))
                if insertNote.octave < 3:
                    insertNote.octave = 3
                insertNote.quarterLength = float(terms[1])
                m1_elements.insert(currOffset, insertNote)

            else:
                relevantApproachTones = []
                for i in range(0, numNotes):
                    currNote = note.Note(lowPitch.transpose(i).simplifyEnharmonic())
                    if __is_approach_tone(lastChord, currNote):
                        relevantApproachTones.append(currNote)
                if len(relevantApproachTones) > 1:
                    insertNote = random.choice([i for i in relevantApproachTones
                        if i.nameWithOctave != prevElement.nameWithOctave])
                elif len(relevantApproachTones) == 1:
                    insertNote = relevantApproachTones[0]
                else: 
                    insertNote = prevElement.transpose(random.choice([-2,2]))
                if insertNote.octave < 3:
                    insertNote.octave = 3
                insertNote.quarterLength = float(terms[1])
                m1_elements.insert(currOffset, insertNote)

            prevElement = insertNote
    return m1_elements    
# data_utils.py
from music_utils import * 
from preprocess import * 
from keras.utils import to_categorical

chords, abstract_grammars = get_musical_data('data/original_metheny.mid')
corpus, tones, tones_indices, indices_tones = get_corpus_data(abstract_grammars)
N_tones = len(set(corpus))
n_a = 64
x_initializer = np.zeros((1, 1, 78))
a_initializer = np.zeros((1, n_a))
c_initializer = np.zeros((1, n_a))

def load_music_utils():
    chords, abstract_grammars = get_musical_data('data/original_metheny.mid')
    corpus, tones, tones_indices, indices_tones = get_corpus_data(abstract_grammars)
    N_tones = len(set(corpus))
    X, Y, N_tones = data_processing(corpus, tones_indices, 60, 30)   
    return (X, Y, N_tones, indices_tones)


def generate_music(inference_model, corpus = corpus, abstract_grammars = abstract_grammars, tones = tones, tones_indices = tones_indices, indices_tones = indices_tones, T_y = 10, max_tries = 1000, diversity = 0.5): 
    out_stream = stream.Stream()
    curr_offset = 0.0                                     
    num_chords = int(len(chords) / 3)                     
    print("Predicting new values for different set of chords.")
    for i in range(1, num_chords):
        curr_chords = stream.Voice()
        for j in chords[i]:
            curr_chords.insert((j.offset % 4), j)
        _, indices = predict_and_sample(inference_model)
        indices = list(indices.squeeze())
        pred = [indices_tones[p] for p in indices]
        predicted_tones = 'C,0.25 '
        
        for k in range(len(pred) - 1):
            predicted_tones += pred[k] + ' ' 
        predicted_tones +=  pred[-1]
        predicted_tones = predicted_tones.replace(' A',' C').replace(' X',' C')
        predicted_tones = prune_grammar(predicted_tones)
        sounds = unparse_grammar(predicted_tones, curr_chords)
        sounds = prune_notes(sounds)
        sounds = clean_up_notes(sounds)
        print('Generated %s sounds using the predicted values for the set of chords ("%s") and after pruning' % (len([k for k in sounds if isinstance(k, note.Note)]), i))
        for m in sounds:
            out_stream.insert(curr_offset + m.offset, m)
        for mc in curr_chords:
            out_stream.insert(curr_offset + mc.offset, mc)
        curr_offset += 4.0
        
    out_stream.insert(0.0, tempo.MetronomeMark(number=130))

    mf = midi.translate.streamToMidiFile(out_stream)
    mf.open("output/my_music.midi", 'wb')
    mf.write()
    print("Your generated music is saved in output/my_music.midi")
    mf.close()    
    return out_stream


def predict_and_sample(inference_model, x_initializer = x_initializer, a_initializer = a_initializer, 
                       c_initializer = c_initializer):
    pred = inference_model.predict([x_initializer, a_initializer, c_initializer])
    indices = np.argmax(pred, axis = -1)
    results = to_categorical(indices, num_classes=78)
    
    return results, indices
