First of all, congrats on the quality of the audio-to-MIDI algorithm. You have been by far the market leader for years, and I’m grateful that I can play any instrument with my guitar. Thanks.
One thing I wonder about / don’t fully understand in your development strategy: why spend the effort developing a nice graphical interface, and all these audio & MIDI effects? There are plenty of DAWs on the market that already do this.
For me personally, I’m using MG2 within a DAW (Ableton Live) and doing all the MIDI / audio routing in Live. It is extremely efficient for this, even for switching between patches (I can do it in sync with the tempo, without any audio dropouts, etc.). So I don’t really need this capability in MG3. Even if I need a specific MIDI effect (sustain, arpeggiator, …), I can use M4L to build it.
The most interesting point for me in MG3 (by far) is MPE tracking. And, when trying out the beta (3.0.52), the main pain point (again, for me) is performance. Where MG2 was using 4% CPU, MG3 now uses 15%.
So I just wanted to share this: what would help me most as a customer is improving the performance of the thing where you have a monopoly (audio-to-MIDI conversion), even if that means having a very stripped-down plugin. I really wouldn’t mind.
Again, thanks for this amazing plugin, these are exciting times, thanks to you
I understand this sentiment and it comes up now and then, in different variations:
I’ve been working on MG tracking models full time since 2007. It’s difficult and often exhausting, and it requires long streaks of focus that also put some constraints on my family situation and all my other commitments, such as customer support here.
After the release of MG2 I was mostly absent here for 7 years while I developed the MG3 MPE tracking model. Most of this research and experimentation ends up being discarded, and then I need a break; often it helps my motivation to just make a new button or feature, see it work, and call it a day.
As for your questions,
MG standalone is way easier to approach for 90% of users (not necessarily those posting here). Standalone also often performs a bit better than running as a plugin in a DAW, because we can optimise more for the specific case where MG hosts the synths and doesn’t need to send MIDI out with no guarantee about how it’s received.
For most DAW use, just use the MINI patch, which is also what is loaded by default. The additional features will never be loaded into memory or waste a single CPU cycle when not used.
As for your CPU concern, make sure you update to 3.0.55, where you can enable “force multithreading” in the settings. CPU performance is one of the places that can likely be optimised further, but I learned the hard way that premature optimisation is futile when the problem target is so difficult.
One could also argue that the average computer has gained more than 3x CPU over the last 7 years and now runs fanless. Whereas MG2 runs on an iPhone 5S, MG3 would be more in the iPhone X era. You can also still use the MG2 tracking in MG3 (polyphonic, without bends).
Hey, if you want, we could help each other in setting up MG3 for live :) @moussasfeir
And other stuff as well, like through video conferencing or Discord. Let me know what you think. And that goes for anyone else who is a Windows user ;)
I understand that it’s a bit simpler to use MG3 than a DAW (I wouldn’t go as far as “way simpler”, but maybe I’m biased since I’m a long-time DAW user). Also, there are quite cheap DAWs too (e.g. Reaper) if one doesn’t need the extra capabilities that Ableton Live offers.
Thanks for the multithreading advice.
@MegaJordon I don’t feel that I need help setting it up in Live, and I have limited time, so Discord / video conferencing scares me a bit. But feel free to ask if you’re stuck. Basically, I create an audio track and put MG3 on it (with the “mini” preset). Then I add a MIDI track, set its input to AudioTrack/MidiGuitar3, and put a synth on it. Make sure both tracks have monitoring set to “In”, and it should work.
@JamO Thanks again, kudos again, and I hope you’ll have fun finalizing the development while keeping a healthy work-life balance
I am an Ableton Live user and an MG3/MG2 user.
I love both.
I have published my latest album, and here (at the end of the song) you can listen to MG3 with Triplecheese: Unwanted Move by Pasha | alonetone (at 2:56)
and here with Deep Expressor: Unwanted Move by Pasha | alonetone (at 2:43)
> MG standalone is way easier to approach for 90% of users (not necessarily those posting here). Standalone also often performs a bit better than running as a plugin in a DAW, because we can optimise more for the specific case where MG hosts the synths and doesn’t need to send MIDI out with no guarantee about how it’s received.
Even though I have Live Suite, I cannot use Ableton’s internal synths with the same level of expression I get with my u-he plugins (including Zebra2). I think JamO’s statement above says it all: you know what you send, but you are not sure about what is received. Maybe that’s the reason. Analog (a Live synth) is MPE-enabled, but I cannot reach the same level of expression. I end up using MG3 as a plugin with a Zebra2 instance inside, which performs better than sending MIDI out to a Zebra2 track.
That’s my experience. YMMV.
This will not magically add information that is not there in the first place. By MPE tracking I mean the actual recognition of audio features and their translation into something tangible (pitch, strike, pressure, brightness, slide, etc.).
Let me add that MG3 sends exactly the same MIDI stream to a synth loaded inside MG, to a MIDI OUT, or to the VST MIDI OUT. It’s exactly the same bytes.
But with a DAW, and VST3 in particular, things get hairy:
MG3 sends out the MIDI data, but the VST3 format doesn’t support MPE directly, so Steinberg translates that into their “note expressions”.
The DAW receives these “note expressions” and will translate them back into MPE. (At this point I’m not at all convinced we get back the exact original MPE data, which best represents your playing.)
Now you would think you just need to configure a synth in the DAW to receive MIDI MPE with some pitch bend range, but that won’t do. You forgot that the DAW track itself must also be configured to receive MPE, with an agreed-upon pitch bend range, in order to do the above translations, somewhere deep inside some settings.
Of course some DAWs are easier than others, and I assume DAW updates will smooth out these workflows over time. I also believe MIDI 2.0 has some of these solutions in mind.
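To make the round-trip above concrete, here is a minimal sketch (not MG3’s actual code) of what MPE looks like on the wire: plain MIDI channel-voice messages, with each sounding note on its own member channel so a bend or pressure message affects only that note. The channel assignments and values are illustrative assumptions.

```python
def note_on(channel, note, velocity):
    # Status byte 0x90 | channel, then note number and velocity (3 bytes).
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value):
    # Status 0xE0 | channel; the 14-bit value (centre = 8192, no bend)
    # is split into a 7-bit LSB and a 7-bit MSB (3 bytes).
    v = value + 8192
    return bytes([0xE0 | channel, v & 0x7F, (v >> 7) & 0x7F])

def channel_pressure(channel, value):
    # Status 0xD0 | channel, then a 7-bit pressure value (2 bytes).
    return bytes([0xD0 | channel, value & 0x7F])

# Two notes on separate MPE member channels, each with independent
# expression -- this per-note data is what VST3 flattens into
# "note expressions" and the DAW has to translate back.
stream = b"".join([
    note_on(1, 64, 100),      # string 1 on member channel 2 (index 1)
    note_on(2, 69, 90),       # string 2 on member channel 3 (index 2)
    pitch_bend(1, 512),       # bends only the first note
    channel_pressure(2, 40),  # pressure only on the second note
])
print(len(stream))  # 11 bytes
```

If any step in the VST3 round-trip drops or re-quantises one of these per-channel messages, the reconstructed MPE stream is no longer byte-identical to what was sent.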
In MG3 you get 6 continuous envelopes for pitch, 6 for pressure, etc. This information represents the small physical fluctuations in the string vibrations and is a reasonably good description of the audio: roughly 30 kB/sec.
In MG2 you get 6 chromatic notes: 48 bytes in total.
Every MIDI machine can of course add bends or notes or arpeggiate or whatever, but you will never get the MPE information back if you don’t have it in the first place.
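As a rough sanity check on the ~30 kB/sec figure, here is a back-of-envelope estimate. The per-dimension update rate and the set of expression dimensions are my assumptions for illustration; only the 3-byte channel-voice message size comes from standard MIDI.

```python
# Back-of-envelope estimate of a continuous MPE data rate.
strings = 6            # one MPE member channel per string
dimensions = 4         # assumed: pitch bend, pressure, brightness (CC74), slide
bytes_per_msg = 3      # typical MIDI channel-voice message size
updates_per_sec = 400  # assumed envelope update rate per dimension

rate = strings * dimensions * bytes_per_msg * updates_per_sec
print(rate)  # 28800 bytes/sec, in the ballpark of "roughly 30 kB/sec"
```

Six bare note-on/note-off pairs, by contrast, amount to a few dozen bytes total, which is why no downstream MIDI machine can reconstruct the expressive data after the fact.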
It is an MPE Max for Live device. It creates true MPE data from standard MIDI data. You can create MPE curves for pressure and slide, and randomize these features as well. It is very deep, powerful, and customizable. It even works in conjunction with MPE controllers using its merge function. Lots of good ideas there for MG3.
But did you try to create curves directly in MG3? Use the MODULATORS module, for example the ADSR, wobble, strike, and pressure envelopes. You can combine these envelopes in all sorts of ways (wire a modulation output to another modulation input parameter), and the beauty of it is that they are built from the real signals from your guitar (strike, pressure, brightness, pitch), so it’s not just a static or random mapping.
I think the current development is right on track for my needs. I will use it for live performance and believe the standalone MG3 Hex app will handle it most efficiently. I don’t need the recording capabilities of a full-fledged DAW; mainly I need the ability to run multiple VST instruments, plus good MIDI routing and MIDI machines. Especially if JamO can implement auto string tune and alt string tune in Hex.