How can I route MIDI output from MG3 in Reaper?

This is almost certainly a stupid newbie question, but whatever.

I’ve installed the beta version in Reaper as a plugin. I have an audio test track with some guitar notes and chords on it as test data. MG3 seems to recognise this, and plays it on the demo synth just fine.

But for now I just want to get MIDI output that I can record on another Reaper track, so I can use other virtual instruments to play the MIDI. I’m sure this is trivially easy but I can’t immediately see how to get the plugin to actually emit MIDI output?

I’m quite familiar with Reaper track routing, by the way: I can get this to work with MG2.

Sorry to appear stupid… this is probably one of those things that is considered ‘so obvious’ that the documentation doesn’t even bother to mention it?

look for midi output in the midi machines group.

I know you’re trying to help, but this is a classic example of “things that are so obvious they don’t need to be documented”.

What exactly is the ‘midi machines group’ and how do I get to it from the GUI that appears by default when I first insert MG3 as an FX in Reaper? Can you walk me through this step by step?
Assume I know nothing about the interface… (largely true)… 🙂

sorry, i don’t have mg on this system otherwise i’d give you some screen shots.

if you click on an empty slot in a chain you will get a list of devices, instruments and effects. the midi machines are near the top of the list.

best to start from a new patch.

Nothing in here that helps?

Thanks, that finds it. I wouldn’t have called that an obviously intuitive path, mind you.
Would have thought that the default behavior should be to output MIDI, with the option of using added internal synths or VSTs as an add-on evolution.

Seems that they are evolving this into a sort of Swiss Army Knife that is almost a DAW or multisynth package in itself? Useful for live performance, perhaps, but I’m more interested in it just for the conversion: the accuracy and latency. Which is why I’m doing comparative tests.

please share the results of your latency tests. if you are trying to optimize latency when using your recorded file, you can run at buffer sizes which wouldn’t normally be usable if you were using a typical vst instrument, for instance 32 or 64 samples.

an app like mg3 necessarily becomes a ‘swiss army knife’. but it is equally at home in the studio or on the stage. also remember that this is still a beta product. features are in flux.

There will be documentation…

As @kimyo said — and it’s possibly one of the most frequently uttered phrases on this forum — this software is in beta.

That said, if you’re ever in doubt, the three factory presets (in regular tracker mode) lay out the basic functions of the software, and all the included modules are listed in the menu that opens when you click on any given slot in a chain.

Certainly. A bit of background: I have had Migic installed for a year or two (I’m sure you have heard of it), and have tried using it on a few recording projects. Came to the conclusion that it is on the cusp of usability, but not quite. If it tracked just a bit faster and better it would be a useful tool.

It seems to be an orphan product now though (at least, they aren’t answering any emails), so in search of a better mousetrap I have installed trial versions of MG2 and the MG3 beta & I’m running some tests.

Gear info: test guitar is an Ibanez Roadstar, bridge humbucker pickup, tone and volume pots full up, recorded directly into the audio interface. DAW is Reaper, latest version 7.30.
I recorded an audio track of notes and chords which is used for all tests so the input data is consistent.
Then applied the various converters and routed the MIDI output in each case to a new track for recording.
Each plugin is as it comes ‘out of the box’: no adjustments at all.

So by expanding the horizontal scale to millisecond resolution I can compare the timing of the MIDI to the source audio. For latency testing I recorded single E notes in octaves from open bottom string up to 12th fret on top string. Of course there’s a degree of uncertainty here: from examining the audio waveform of the recorded guitar, it’s a bit of a judgement call to say exactly where the note ‘begins’.
Plus or minus a few milliseconds, I guess.
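
In case anyone wants to sanity-check the eyeballed readings, the same comparison could in principle be scripted outside Reaper. Here is a rough sketch, assuming Python with the soundfile and mido packages, hypothetical file names, and that the rendered audio and the exported MIDI both start from the same project time zero:

```python
import numpy as np
import soundfile as sf   # pip install soundfile
import mido              # pip install mido

# Hypothetical file names: the source guitar audio and the MIDI recorded from the converter.
AUDIO_FILE = "guitar.wav"
MIDI_FILE = "converted.mid"
ONSET_THRESHOLD = 0.05   # crude amplitude threshold; as much a judgement call as reading the waveform

# Audio onset = first sample whose absolute amplitude crosses the threshold.
audio, sample_rate = sf.read(AUDIO_FILE)
if audio.ndim > 1:
    audio = audio[:, 0]                      # use the left channel if the file is stereo
audio_onset_s = int(np.argmax(np.abs(audio) > ONSET_THRESHOLD)) / sample_rate

# First note-on in the MIDI file (iterating a MidiFile yields delta times in seconds).
elapsed = 0.0
midi_onset_s = None
for msg in mido.MidiFile(MIDI_FILE):
    elapsed += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        midi_onset_s = elapsed
        break

print(f"audio onset      : {audio_onset_s * 1000:.1f} ms")
print(f"first MIDI note  : {midi_onset_s * 1000:.1f} ms")
print(f"latency estimate : {(midi_onset_s - audio_onset_s) * 1000:.1f} ms")
```

This only handles the first note in each file, but it would at least take my judgement about where the note ‘begins’ out of the loop, or rather pin it to one fixed threshold.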

The results so far are… confusing. There is a LOT of variance.
I’m not sure what is going on yet. Either:

  1. I hit each of the notes at different levels of attack & that affects the conversion? Or,
  2. My entire methodology is flawed and Reaper doesn’t really display audio &/or MIDI timings to that degree of accuracy?

More tests to come. I may try a new recording with each note normalized to about the same level.
And of course, this is just technical testing so far (I admit it, I’m a systems software engineer in my day job)!
Can’t comment yet on subjective usability… I’ll probably create a test song over the next few days to experiment.

So all that said, here are some (confusing) measurements so far… two instances of each note:

Open Low E

| Converter | Instance 1 (ms) | Instance 2 (ms) |
| --- | --- | --- |
| Migic | 24 | 24 |
| MG2 poly | 28 | 27 |
| MG2 mono | 22 | 21 |
| MG3 poly | 22 | 29 |
| MG3 mono | 25 | 27 |

Mid E (4th string 2nd fret)

| Converter | Instance 1 (ms) | Instance 2 (ms) |
| --- | --- | --- |
| Migic | 19 | 13 |
| MG2 poly | 16 | 21 |
| MG2 mono | 16 | 11 |
| MG3 poly | 22 | 21 |
| MG3 mono | 19 | 21 |

Top E string

| Converter | Instance 1 (ms) | Instance 2 (ms) |
| --- | --- | --- |
| Migic | 11 | 17 |
| MG2 poly | 17 | 12 |
| MG2 mono | 8 | 7 |
| MG3 poly | 25 | 22 |
| MG3 mono | 9 | 17 |

Top string 12th fret

| Converter | Instance 1 (ms) | Instance 2 (ms) |
| --- | --- | --- |
| Migic | 30 | 9 |
| MG2 poly | 24 | 16 |
| MG2 mono | 19 | 11 |
| MG3 poly | 24 | 22 |
| MG3 mono | 27 | 22 |

thanks for taking the time to elaborate.

i had never heard of migic, makes me wonder how many teams are working on this tech. neural dsp has monophonic pitch to internal synth in their ‘archetype rabea’, which is very playable.

i don’t think pre-processing the signal (normalization) is the right way to go for this task. i don’t think higher attack will give faster detection, unless perhaps the guitar signal is too low to begin with.

i am prepared to be wrong.

very strange that top e 12 is so much slower across the board than open top e.

what sample rate are you using?

this page has some notes on latency testing which may be of interest:

I’m familiar with MIGIC but could never make friends with it when I tried it out in around 2016 (I think).

I’ve probably undergone some shift in how I think about P-2-M tech compared to 20 years ago: I care far more about the sensation of playing than the somewhat arbitrary numbers I see when I pull up meters and data, though there is surely some degree of correlation involved, and setting things up optimally is critical to guarantee that comfort.

From where I’m sitting, it seems like there’s more guitar synthesis going around than ever before — but no SynthAxes!!! It seems to be revealing itself everywhere.

One perhaps interesting thing: I just discovered a few days ago that there’s a “guitar synth” module in the RNBO Guitar Pedal demo included in Max 8 (Cycling ‘74) though it’s quite limited. There’s even an FFT module in Max that could probably come in handy if you wanted to try to roll your own P-2-M tools.

I’ll try the experiment. As I said, the guitar was recorded directly to the audio interface with no processing, and I think the level was set at a fairly typical point: well short of clipping but a healthy signal. Pretty much what one would use for normal recording in a project.

Just the Reaper defaults: 48 kHz, 24-bit.

I think the next experiment will be to use the same recorded guitar take, but to enable audio generation in the plugins: then record the output audio rather than output MIDI. Then compare the audio files.
Just wondering if there is something vaguely funky in Reaper about routing &/or recording MIDI?

Stay tuned…

Very true, in the final analysis it’s usable playability that matters.
I plan to run a few more technical tests and then I’ll create a song project and see how things work with that.

So here are the results.
Migic has a built-in piano sound; for MG2 I used the built-in electric piano; and for the MG3 beta I just used the test synth that came out of the box.

Some conclusions. There is still a lot of variability, but there seems to be a pretty strong correlation between the MIDI output and audio output tests.

So I think:

  1. This rules out the hypothesis that there is something odd about MIDI routing &/or recording in Reaper.
  2. Seems to support the conclusion that there is something about the guitar notes in the test audio (attack or just overall level?) which is strongly influencing the conversion.

Here’s the data. Sorry about the format, the posting software eats whitespace and I haven’t figured out how to override that yet.

Open Low E

| Converter | MIDI 1 (ms) | MIDI 2 (ms) | Audio 1 (ms) | Audio 2 (ms) |
| --- | --- | --- | --- | --- |
| Migic | 24 | 24 | 28 | 34 |
| MG2 poly | 28 | 27 | 28 | 26 |
| MG2 mono | 22 | 21 | 23 | 22 |
| MG3 poly | 22 | 29 | 22 | 23 |
| MG3 mono | 25 | 27 | 23 | 23 |

Mid E (4th string 2nd fret)

| Converter | MIDI 1 (ms) | MIDI 2 (ms) | Audio 1 (ms) | Audio 2 (ms) |
| --- | --- | --- | --- | --- |
| Migic | 19 | 13 | 24 | 19 |
| MG2 poly | 16 | 21 | 16 | 22 |
| MG2 mono | 16 | 11 | 18 | 10 |
| MG3 poly | 22 | 21 | 22 | 16 |
| MG3 mono | 19 | 21 | 10 | 12 |

Top E string

| Converter | MIDI 1 (ms) | MIDI 2 (ms) | Audio 1 (ms) | Audio 2 (ms) |
| --- | --- | --- | --- | --- |
| Migic | 11 | 17 | 16 | 20 |
| MG2 poly | 17 | 12 | 17 | 12 |
| MG2 mono | 8 | 7 | 7 | 7 |
| MG3 poly | 25 | 22 | 22 | 18 |
| MG3 mono | 9 | 17 | 8 | 7 |

Top string 12th fret

| Converter | MIDI 1 (ms) | MIDI 2 (ms) | Audio 1 (ms) | Audio 2 (ms) |
| --- | --- | --- | --- | --- |
| Migic | 30 | 9 | 31 | 7 |
| MG2 poly | 24 | 16 | 24 | 17 |
| MG2 mono | 19 | 11 | 19 | 11 |
| MG3 poly | 24 | 22 | 19 | 18 |
| MG3 mono | 27 | 22 | 20 | 11 |

Next I may try a version of the test audio with each note item normalized to about the same level.
By the way, if any developer might be interested in the audio test file I’m using, I’m quite happy to provide it.
This was just something I casually knocked out in one take: I wasn’t making any effort to be consistent in volume or attack etc.

in terms of audio settings there are three sets of numbers: the sample rate (44.1/48/96 kHz), the bit depth (16/24/32) and the buffer size (64/128/256/512 samples).

the last has the greatest effect on latency. in this case ‘reaper defaults’ may not be fully representative. most users will fine tune the buffer size to their particular needs/configuration.

please do post the audio file you’re working from and i’ll spot check to see if i get similar results.

it is possible that reaper and other daws will have a very small and random latency variance depending on when the note hits in the processing interval. if you figure this might be 2 or 3 ms, and add in the other 2 or 3 ms margin of error in determining the note onset, that is enough to explain the smaller variances in the work you’ve done so far.
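
as a quick back-of-envelope check, here is a tiny python sketch (nothing beyond the standard library) of how much the block size alone can contribute at 48khz:

```python
# back-of-envelope: worst-case jitter from where a note lands inside one processing block.
# sketch only; assumes the device block size is the dominant per-callback quantum.
SAMPLE_RATE = 48_000  # Hz

for block_size in (32, 64, 128, 256, 512):
    block_ms = block_size / SAMPLE_RATE * 1000
    print(f"{block_size:>4} samples -> {block_ms:5.2f} ms per block "
          f"(up to {block_ms:.2f} ms of onset jitter)")
```

at 64 samples that is only about 1.3 ms per block, so the margin of error in reading the note onset is probably the bigger contributor.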

in this case, it may be that mg3 standalone delivers lower latency. in my setup, it certainly feels that way. i’m running standalone at 64 samples. i don’t load any synths into the standalone instance.

As far as I can see there are two basic buffer size parameters in Reaper.
First there is the ASIO driver (I’m on Win 11): the request block size is set to 64.
Then there’s the media buffer size, set by default to 1200 ms.

I haven’t tried altering these for tests yet.

Also I just realized I did not go through the ‘select tracker’ step in MG3 settings to select guitar as instrument, so that might have some effect. I’ll repeat the MG3 tests with that done.

Will also run tests with a variant of the source file with each note normalized to the same level.

I am happy to provide a .wav of the test file. It’s about 2.7 MB, saved at 48 kHz, 24-bit.
Not sure how to ship it though: unfamiliar with this board software so I don’t know if there’s a way to attach things to posts?

Could be, but I don’t have the test gear to evaluate a standalone version.
And for my use case, since I’m recording and not playing out at the moment, I am most interested in the VST plugin versions…

Interim update: didn’t seem to change the results much.
Question: is the tracker actually tuned or tweaked differently for different instrument types?

Moving on to tests with each note normalized to a similar LUFS.

Next iteration: tests with each note split as a separate item in Reaper and all normalized to about the same level. Summary: didn’t seem to make a lot of difference to the latency, and the odd variances still remain.

Assuming Reaper’s amplification is reasonably linear, this seems to suggest that level as such is not a major factor in the conversion. So what remains? I can only think it has something to do with the ‘timbre’ or spectral composition of each particular note.

Of course, I don’t know how much Reaper’s display of the waveform can be trusted… need to find a reliable oscilloscope utility to check it…
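
Until I find one, something along these lines would probably do as a cross-check on the waveform and spectrum of a single note. This is only a sketch, assuming Python with soundfile, numpy and matplotlib, and a hypothetical one-note excerpt saved as note.wav:

```python
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf

# Hypothetical excerpt of a single note from the test recording.
audio, sample_rate = sf.read("note.wav")
if audio.ndim > 1:
    audio = audio[:, 0]

t = np.arange(len(audio)) / sample_rate
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1 / sample_rate)

fig, (ax_wave, ax_spec) = plt.subplots(2, 1, figsize=(10, 6))
ax_wave.plot(t, audio)                 # time-domain view ("oscilloscope")
ax_wave.set_xlabel("time (s)")
ax_wave.set_ylabel("amplitude")

ax_spec.plot(freqs, spectrum)          # magnitude spectrum
ax_spec.set_xlim(0, 2000)              # guitar fundamentals and low harmonics
ax_spec.set_xlabel("frequency (Hz)")
ax_spec.set_ylabel("magnitude")

plt.tight_layout()
plt.show()
```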

I have some thoughts on this as an engineer and physicist regarding how we might improve guitar to MIDI conversion, but let me mull on that.

Here’s the data, again sorry about the format (whitespace etc).

All times are in milliseconds.

Open Low E

| Converter | MIDI 1 | MIDI 2 | Audio 1 | Audio 2 | Normalized 1 | Normalized 2 |
| --- | --- | --- | --- | --- | --- | --- |
| Migic | 24 | 24 | 28 | 34 | 23 | 24 |
| MG2 poly | 28 | 27 | 28 | 26 | 28 | 26 |
| MG2 mono | 22 | 21 | 23 | 22 | 21 | 20 |
| MG3 poly | 22 | 29 | 22 | 23 | 23 | 24 |
| MG3 mono | 25 | 27 | 23 | 23 | 25 | 26 |

Mid E (4th string 2nd fret)

| Converter | MIDI 1 | MIDI 2 | Audio 1 | Audio 2 | Normalized 1 | Normalized 2 |
| --- | --- | --- | --- | --- | --- | --- |
| Migic | 19 | 13 | 24 | 19 | 10 | 10 |
| MG2 poly | 16 | 21 | 16 | 22 | 15 | 20 |
| MG2 mono | 16 | 11 | 18 | 10 | 15 | 10 |
| MG3 poly | 22 | 21 | 22 | 16 | 22 | 18 |
| MG3 mono | 19 | 21 | 10 | 12 | 19 | 21 |

Top E string

| Converter | MIDI 1 | MIDI 2 | Audio 1 | Audio 2 | Normalized 1 | Normalized 2 |
| --- | --- | --- | --- | --- | --- | --- |
| Migic | 11 | 17 | 16 | 20 | 8 | 18 |
| MG2 poly | 17 | 12 | 17 | 12 | 17 | 11 |
| MG2 mono | 8 | 7 | 7 | 7 | 8 | 7 |
| MG3 poly | 25 | 22 | 22 | 18 | 25 | 19 |
| MG3 mono | 9 | 17 | 8 | 7 | 8 | 17 |

Top string 12th fret

| Converter | MIDI 1 | MIDI 2 | Audio 1 | Audio 2 | Normalized 1 | Normalized 2 |
| --- | --- | --- | --- | --- | --- | --- |
| Migic | 30 | 9 | 31 | 7 | 16 | 7 |
| MG2 poly | 24 | 16 | 24 | 17 | 24 | 16 |
| MG2 mono | 19 | 11 | 19 | 11 | 19 | 10 |
| MG3 poly | 24 | 22 | 19 | 18 | 24 | 19 |
| MG3 mono | 27 | 22 | 20 | 11 | 24 | 22 |

there is a 4mb limit on file uploads, so yours should be fine. there’s an up arrow icon in the message composition box.

migic definitely seems to prefer normalized audio.

here’s a collection of approaches. as you’ve been doing, the author suggests that testing is better done with recorded files than with live performance.

> Development is always test-driven, offline. I’ve collected a test suite of samples that showcase various playing techniques, ranging from single notes to melodic lines and musical passages. I use these recordings to test new ideas and fine tune the algorithms. I seldom test live. Why? When testing live, we become part of the feedback loop. Performers often unconsciously adjust their playing to compensate for the detector’s shortcomings—something I aim to avoid. The goal is to create a system that adapts to the musician’s natural playing, not the other way around.

Interesting link, thanks. I had already observed that the guitar waveform is far from sinusoidal, at least if the Reaper wave display is to be trusted. (I want to find an oscilloscope utility to cross-check this.)
So the Fourier transform is likely to be rather complex, suggesting that a purely FFT-based approach is probably not optimal and that at least some time-domain analysis is desirable.

A fairly obvious thought, and clearly from the link people have already been working along those lines.
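
For what it’s worth, the sort of time-domain analysis I have in mind is nothing more exotic than an autocorrelation estimate of the fundamental. This is a toy sketch in Python with numpy, purely illustrative and not anything the MG developers have said they actually use:

```python
import numpy as np

def autocorrelation_pitch(frame: np.ndarray, sample_rate: int,
                          fmin: float = 70.0, fmax: float = 1000.0) -> float:
    """Rough time-domain estimate of the fundamental for one analysis frame."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]

    # only consider lags corresponding to plausible guitar fundamentals
    min_lag = int(sample_rate / fmax)
    max_lag = int(sample_rate / fmin)
    lag = min_lag + int(np.argmax(corr[min_lag:max_lag]))
    return sample_rate / lag

# example: a synthetic 110 Hz "string" with a few harmonics, 30 ms frame at 48 kHz
sr = 48_000
t = np.arange(int(0.03 * sr)) / sr
test = sum(np.sin(2 * np.pi * 110 * k * t) / k for k in range(1, 6))
print(f"estimated fundamental: {autocorrelation_pitch(test, sr):.1f} Hz")
```

The hard part, of course, is doing something like this reliably on a real plucked note within a couple of milliseconds of the attack.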

I’m attaching the audio file I used for my testing.
Will be interesting to see what your tests make of it.

I think I’m about done with technical tests for the moment.
Time to create a project and look into musical playability!

i ran the audio thru mg3 standalone using polyphonic and monophonic modes at 64 samples.

i panned the dry guitar left and the synth right (i used the fm synth).

i recorded using melda mrecorder on the master chain. i just did one pass.

i did the calc by cutting the clip at the beginning of the dry guitar, and then again at the beginning of the synth, and measuring the length.
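
if you want to automate that instead of cutting clips by hand, a rough sketch along these lines should work (python with soundfile/numpy, hypothetical file name, crude fixed threshold), given the dry guitar hard left and the synth hard right:

```python
import numpy as np
import soundfile as sf

# assumes dry guitar panned hard left, synth hard right, as in the pass described above
audio, sample_rate = sf.read("mg3_poly_pass.wav")   # hypothetical file name
left, right = audio[:, 0], audio[:, 1]
THRESHOLD = 0.05                                    # crude onset threshold, needs tuning

def first_onset(channel: np.ndarray) -> int:
    """index of the first sample that crosses the threshold"""
    return int(np.argmax(np.abs(channel) > THRESHOLD))

# measures only the first note; for per-note numbers, slice the file into note regions first
latency_ms = (first_onset(right) - first_onset(left)) / sample_rate * 1000
print(f"synth lags guitar by roughly {latency_ms:.1f} ms")
```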

the files are attached, i had to convert to mp3 to get them to upload.

in general i’d say my poly results are worse than yours, and mono is better.

| poly (ms) | mono (ms) |
| --- | --- |
| 31 | 25 |
| 27 | 16 |
| 23 | 6 |
| 28 | 19 |
| 25 | 12 |
| 23 | 11 |
| 26 | 22 |

monophonic:

polyphonic: