Created: 5 years, 5 months ago by mgraczyk
Modified: 5 years, 5 months ago
CC: webrtc-reviews_webrtc.org, interface-changes_webrtc.org, rwolff_gocast.it, yujie_mao (webrtc), Andrew MacDonald, hlundin-webrtc, tterriberry_mozilla.com, qiang.lu, niklas.enbom, kwiberg-webrtc, aluebs-webrtc, bjornv1, ekm
Base URL: https://chromium.googlesource.com/external/webrtc.git@master
Target Ref: refs/pending/heads/master
Project: webrtc
Visibility: Public.
Description: Allow more than 2 input channels in AudioProcessing.
The number of output channels is constrained to be equal to either 1 or the
number of input channels.
R=aluebs@webrtc.org, andrew@webrtc.org, pbos@webrtc.org
Committed: https://chromium.googlesource.com/external/webrtc/+/c204754b7a0cc801c70e8ce6c689f57f6ce00b3b
Patch Set 1 #
Patch Set 2 : Fix build issues on mac #
Patch Set 3 : Fix CopyTo and CopyFrom in AudioBuffer #
Patch Set 4 : Fix build problem on MSVC #
Patch Set 5 : Fix comments and rearrange code #
Total comments: 6
Patch Set 6 : Fix mac build #
Total comments: 99
Patch Set 7 : Address Comments #
Total comments: 5
Patch Set 8 : Address more comments #
Patch Set 9 : Reupload #
Total comments: 3
Patch Set 10 : Remove special deinterleave cases and fix sample rate check #
Patch Set 11 : Change ProcessStream interface #
Total comments: 20
Patch Set 12 : Address some comments #
Patch Set 13 : Fix docs #

Messages
Total messages: 37 (9 generated)
This patch enables multichannel (>2) input audio streams. We can now beamform arbitrarily large microphone arrays. https://codereview.webrtc.org/1226093007/diff/80001/webrtc/common_audio/wav_f... File webrtc/common_audio/wav_file.cc (left): https://codereview.webrtc.org/1226093007/diff/80001/webrtc/common_audio/wav_f... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, This check fails when the number of channels does not divide the buffer size. It doesn't seem correct because we may have written a chunk that splits a frame without writing all samples. The only thing that changes here is num_samples_ and the check happens on Close. If an invalid number of total samples has been written, we will catch it there. https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/audio_buffer.cc:410: DownmixInterleavedToMono(frame->data_, deinterleaved[0], input_num_frames_, This function works for any number of input channels. https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/audio_processing_impl.cc (left): https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/audio_processing_impl.cc:649: transient_suppressor_->Suppress(ca->channels_f()[0], This is just my clang_format disagreeing with yours. Should not be changed. https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/audio_processing_impl.cc:1074: #if defined(WEBRTC_ARCH_BIG_ENDIAN) More clang_format here and above. https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... 
File webrtc/modules/audio_processing/audio_processing_impl.h (left): https://codereview.webrtc.org/1226093007/diff/80001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/audio_processing_impl.h:50: class AudioRate { I replaced AudioRate and AudioFormat with the externally visible StreamConfig class.
mgraczyk@chromium.org changed reviewers: + aluebs@webrtc.org
Added aluebs as a reviewer.
Thank you for doing this! It is awesome! :) https://codereview.chromium.org/1226093007/diff/100001/webrtc/common_audio/wa... File webrtc/common_audio/wav_file.cc (left): https://codereview.chromium.org/1226093007/diff/100001/webrtc/common_audio/wa... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, Why do you need to remove this? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:26: int KeyboardChannelIndex(const StreamConfig& stream_config) { You could check inside here if stream_config.has_keyboard() and else return -1. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:38: void DownmixInterleavedToMono(const T* interleaved, Is this specialization used anywhere? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:41: int num_channels) { Inputs before outputs? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:42: return DownmixInterleavedToMonoImpl<T, T>( This return isn't needed, right? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:46: template <> Why is this template needed? Just curious. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:51: return DownmixInterleavedToMonoImpl<int16_t, int32_t>( This return isn't needed, right? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_buffer.cc:335: DownmixStereoToMono<int16_t, int32_t>( Shouldn't this support multiple channels? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_buffer.h (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:33: template <typename T, typename Intermediate> What do you think about adding these to audio_util.h? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:34: void DownmixStereoToMono(int num_frames, Is it worth it to specialize for the stereo case? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:37: const T* right) { Inputs before outputs? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:47: int num_channels) { Inputs before outputs? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:68: int num_channels) { Inputs before outputs? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:77: num_channels * num_multichannel_frames * sizeof(*deinterleaved)); You don't need to multiply by num_channels since you know it is 1 in this case. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:78: } else if (num_channels == 2) { Is it worth it to specialize for the stereo case? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:86: while (interleaved < end) { I find for with indexes easier to follow, but I leave it up to you. 
https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_processing_impl.cc (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:279: int AudioProcessingImpl::Initialize(int input_sample_rate_hz, Will this interface be replaced by one with a ProcessingConfig? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:312: render_audio_.reset(nullptr); When do we want this case? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:348: if (stream.sample_rate_hz() < 0) { <= https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:351: if (stream.num_channels() < 0) { < 1 https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:367: (static_cast<size_t>(config.input_stream().num_channels()) != You can use num_in_channels here. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:530: sizeof(float) * processing_config.input_stream().samples_per_channel(); How about using api_format_.input_stream().samples_per_channel() to be consistent with next line? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:544: sizeof(float) * processing_config.input_stream().samples_per_channel(); How about using api_format_.input_stream().samples_per_channel() to be consistent with next line? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_processing_impl.cc:574: ProcessingConfig processing_config = api_format_; To be consistent, how about creating the ProcessingConfig like: const ProcessingConfig processing_config = { ... } https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:703: if (reverse_config.num_channels() <= 0) { Why is this needed? Is this because you don't check it in the initialization? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:711: if (reverse_config.samples_per_channel() != Move this if statement to the API which receives samples_per_channel and assert directly here, as you did in the ProcessStream interface. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:749: if (frame->num_channels_ <= 0) { Why is this needed? Is this because you don't check it in the initialization? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:320: virtual int ProcessStream(const float* const* src, The idea is to migrate to this interface and get rid of the other one? Because adding support to yet another interface is exactly the opposite of what we want. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, Here you are passing in a ReverseStream as well. You could probably just drop the ProcessingConfig and pass in 2 StreamConfigs instead. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/include/audio_processing.h:350: virtual int AnalyzeReverseStream(const float* const* data, Same comment about adding yet another interface. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:491: return sample_rate_hz * (AudioProcessing::kChunkSizeMs / 1000.0); Why do you need 1000.0 if you cast it to int anyway? Also, why do you need the parenthesis? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:500: struct ProcessingConfig { Why not a class? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/modules... File webrtc/modules/modules.gyp (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/modules... webrtc/modules/modules.gyp:173: 'audio_processing/audio_buffer_unittest.cc', Alphabetical order.
PTAL https://codereview.chromium.org/1226093007/diff/100001/webrtc/common_audio/wa... File webrtc/common_audio/wav_file.cc (left): https://codereview.chromium.org/1226093007/diff/100001/webrtc/common_audio/wa... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Why do you need to remove this? My comments were left behind on a previous patch. I'll paste them here and in the other places: This check fails when the number of channels does not divide the buffer size. It doesn't seem correct because we may have written a chunk that splits a frame without writing all samples. The only thing that changes here is num_samples_ and the check happens on Close. If an invalid number of total samples has been written, we will catch it there. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:26: int KeyboardChannelIndex(const StreamConfig& stream_config) { On 2015/07/14 23:12:42, aluebs-webrtc wrote: > You could check inside here if stream_config.has_keyboard() and else return -1. Done, although the only callsite is conditional on has_keyboard being true. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:38: void DownmixInterleavedToMono(const T* interleaved, On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Is this specialization used anywhere? This isn't the specialization, it's a normal template definition. I could static_assert false here instead of providing default behavior, but this is the behavior we'd want for floats. What should I do? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_buffer.cc:41: int num_channels) { On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Inputs before outputs? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:42: return DownmixInterleavedToMonoImpl<T, T>( On 2015/07/14 23:12:42, aluebs-webrtc wrote: > This return isn't needed, right? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:46: template <> On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Why is this template needed? Just curious. I could write a function called "DownmixInterleavedInt16ToMono" instead. I like this because it allows us to worry about the buffer data types here instead of at all the possible call sites. If we want to use int32 or int8 instead, we just add another overload here. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:51: return DownmixInterleavedToMonoImpl<int16_t, int32_t>( On 2015/07/14 23:12:42, aluebs-webrtc wrote: > This return isn't needed, right? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:335: DownmixStereoToMono<int16_t, int32_t>( On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Shouldn't this support multiple channels? Done. Does this work? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:406: if (num_proc_channels_ == 1) { This function works for any number of input channels. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_buffer.h (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_buffer.h:33: template <typename T, typename Intermediate> On 2015/07/14 23:12:43, aluebs-webrtc wrote: > What do you think about adding these to audio_util.h? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:34: void DownmixStereoToMono(int num_frames, On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Is it worth it to specialize for the stereo case? Probably, although I didn't benchmark it. I figured the stereo case would be extremely common. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:37: const T* right) { On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Inputs before outputs? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:47: int num_channels) { On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Inputs before outputs? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:68: int num_channels) { On 2015/07/14 23:12:43, aluebs-webrtc wrote: > Inputs before outputs? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:77: num_channels * num_multichannel_frames * sizeof(*deinterleaved)); On 2015/07/14 23:12:42, aluebs-webrtc wrote: > You don't need to multiply by num_channels since you know it is 1 in this case. Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:78: } else if (num_channels == 2) { On 2015/07/14 23:12:42, aluebs-webrtc wrote: > Is it worth it to specialize for the stereo case? Same answer, although here the savings is not going to be as much. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_buffer.h:86: while (interleaved < end) { On 2015/07/14 23:12:43, aluebs-webrtc wrote: > I find for with indexes easier to follow, but I leave it up to you. Acknowledged. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_processing_impl.cc (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:279: int AudioProcessingImpl::Initialize(int input_sample_rate_hz, On 2015/07/14 23:12:43, aluebs-webrtc wrote: > Will this interface be replaced by one with a ProcessingConfig? Yes, eventually. It'll be easiest to first commit this, then swap clients over to the other Initialize() overload, then remove this overload in 3 separate CLs. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:312: render_audio_.reset(nullptr); On 2015/07/14 23:12:43, aluebs-webrtc wrote: > When do we want this case? This is just saying "When there is no reverse stream, don't bother allocating a buffer for it". https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:348: if (stream.sample_rate_hz() < 0) { On 2015/07/14 23:12:43, aluebs-webrtc wrote: > <= The reverse stream can have zero sampling rate and zero channels here. I check that the reverse channel is not zero in AnalyzeReverseStream. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:351: if (stream.num_channels() < 0) { On 2015/07/14 23:12:43, aluebs-webrtc wrote: > < 1 See above https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_processing_impl.cc:367: (static_cast<size_t>(config.input_stream().num_channels()) != On 2015/07/14 23:12:43, aluebs-webrtc wrote: > You can use num_in_channels here. Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:530: sizeof(float) * processing_config.input_stream().samples_per_channel(); On 2015/07/14 23:12:43, aluebs-webrtc wrote: > How about using api_format_.input_stream().samples_per_channel() to be > consistent with next line? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:544: sizeof(float) * processing_config.input_stream().samples_per_channel(); On 2015/07/14 23:12:43, aluebs-webrtc wrote: > How about using api_format_.input_stream().samples_per_channel() to be > consistent with next line? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:574: ProcessingConfig processing_config = api_format_; On 2015/07/14 23:12:43, aluebs-webrtc wrote: > To be consistent, how about creating the ProcessingConfig like: > const ProcessingConfig processing_config = { ... } I did this to be defensive against the possibility of new fields in ProcessingConfig in the future. We want to copy api_format_, only changing the sampling rate and channel count. If we add new fields, those should be copied as well rather than set to default values (which is what the aggregate initialization syntax would result in). WDYT? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:703: if (reverse_config.num_channels() <= 0) { On 2015/07/14 23:12:43, aluebs-webrtc wrote: > Why is this needed? Is this because you don't check it in the initialization? Yes. 
https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:711: if (reverse_config.samples_per_channel() != On 2015/07/14 23:12:43, aluebs-webrtc wrote: > Move this if statement to the API which receives samples_per_channel and assert > directly here, as you did in the ProcessStream interface. Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:749: if (frame->num_channels_ <= 0) { On 2015/07/14 23:12:43, aluebs-webrtc wrote: > Why is this needed? Is this because you don't check it in the initialization? Yes, same thing. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:320: virtual int ProcessStream(const float* const* src, On 2015/07/14 23:12:44, aluebs-webrtc wrote: > The idea is to migrate to this interface and get rid of the other one? Because > adding support to yet another interface is exactly the opposite of what we want. Yes, I am hoping that is the end result. With a "config" struct and sensible defaults, it will be easier to change this interface without changing client code. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/14 23:12:43, aluebs-webrtc wrote: > Here you are passing in a ReverseStream as well. You could probably just drop > the ProcessingConfig and pass in 2 StreamConfigs instead. That's true, but this makes the interface more consistent and easier to change. I'll do whatever you want here. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/include/audio_processing.h:350: virtual int AnalyzeReverseStream(const float* const* data, On 2015/07/14 23:12:44, aluebs-webrtc wrote: > Same comment about adding yet another interface. Acknowledged. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:491: return sample_rate_hz * (AudioProcessing::kChunkSizeMs / 1000.0); I guess if all our sampling rates are divisible by 100, it is okay to do the integer division. When I wrote this I was mistakenly thinking of sample_rate_hz_ as a float. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:500: struct ProcessingConfig { On 2015/07/14 23:12:44, aluebs-webrtc wrote: > Why not a class? Done. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/modules... File webrtc/modules/modules.gyp (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/modules... webrtc/modules/modules.gyp:173: 'audio_processing/audio_buffer_unittest.cc', On 2015/07/14 23:12:44, aluebs-webrtc wrote: > Alphabetical order. File removed
ajm@chromium.org changed reviewers: + ajm@chromium.org
Just glanced at it so far; looks good in general. Will try to have a closer look soon. https://codereview.chromium.org/1226093007/diff/120001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.chromium.org/1226093007/diff/120001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:305: virtual int ProcessStream(const float* const* src, Can you add a TODO here and on the deprecated AnalyzeReverseStream to remove when clients are updated? https://codereview.chromium.org/1226093007/diff/120001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:467: samples_per_channel_(calculate_samples_per_channel(sample_rate_hz)) {} Alex, Michael: I know it breaks convention in this module, but what do you think about using num_frames_ to represent this quantity? We've been switching elsewhere.
https://codereview.chromium.org/1226093007/diff/120001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.chromium.org/1226093007/diff/120001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:467: samples_per_channel_(calculate_samples_per_channel(sample_rate_hz)) {} On 2015/07/15 05:21:20, ajm wrote: > Alex, Michael: I know it breaks convention in this module, but what do you think > about using num_frames_ to represent this quantity? We've been switching > elsewhere. It is unfortunate that we used "frames" to refer to blocks/chunks in other places, so I find the term "frames" really confusing. But now that we have a convention, I think we should stick with it for all new code. In short, I agree it is probably for the best to use num_frames_.
https://codereview.chromium.org/1226093007/diff/100001/webrtc/common_audio/wa... File webrtc/common_audio/wav_file.cc (left): https://codereview.chromium.org/1226093007/diff/100001/webrtc/common_audio/wa... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, On 2015/07/15 01:12:45, mgraczyk wrote: > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > Why do you need to remove this? > > My comments were left behind on a previous patch. I'll paste them here and in > the other places: > > This check fails when the number of channels does not divide the buffer size. It > doesn't seem correct because we may have written a chunk that splits a frame > without writing all samples. The only thing that changes here is num_samples_ > and the check happens on Close. If an invalid number of total samples has been > written, we will catch it there. But do we want to write a frame without all samples? I mean, why should that happen? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:38: void DownmixInterleavedToMono(const T* interleaved, On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > Is this specialization used anywhere? > > This isn't the specialization, it's a normal template definition. I could > static_assert false here instead of providing default behavior, but this is the > behavior we'd want for floats. What should I do? I meant, I don't think we have any interleaved float cases, right? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:46: template <> On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > Why is this template needed? Just curious. 
> > I could write a function called "DownmixInterleavedInt16ToMono" instead. I like > this because it allows us to worry about the buffer data types here instead of > at all the possible call sites. If we want to use int32 or int8 instead, we just > add another overload here. Thanks for the explanation! Makes sense :) https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.cc:335: DownmixStereoToMono<int16_t, int32_t>( On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > Shouldn't this support multiple channels? > > Done. Does this work? You don't need to do this manually, audio_buffer/channel_buffer does it for you. Just use split_channels_const(kBand0To8kHz) instead of split_bands_const(). https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_buffer.h (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_buffer.h:34: void DownmixStereoToMono(int num_frames, On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > Is it worth it to specialize for the stereo case? > > Probably, although I didn't benchmark it. I figured the stereo case would be > extremely common. Yes, common, but do you gain anything? It definitely adds an extra if statement to all cases. Plus more code to maintain. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/audio_processing_impl.cc (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... 
webrtc/modules/audio_processing/audio_processing_impl.cc:279: int AudioProcessingImpl::Initialize(int input_sample_rate_hz, On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > Will this interface be replaced by one with a ProcessingConfig? > > Yes, eventually. It'll be easiest to first commit this, then swap clients over > to the other Initialize() overload, then remove this overload in 3 separate CLs. Agreed. Just making sure that that is the plan. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:312: render_audio_.reset(nullptr); On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > When do we want this case? > > This is just saying "When there is no reverse stream, don't bother allocating a > buffer for it". Ack. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:348: if (stream.sample_rate_hz() < 0) { On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > <= > > The reverse stream can have zero sampling rate and zero channels here. I check > that the reverse channel is not zero in AnalyzeReverseStream. But here it looks like all streams can (although you check the num_channels_ below). https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:574: ProcessingConfig processing_config = api_format_; On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > To be consistent, how about creating the ProcessingConfig like: > > const ProcessingConfig processing_config = { ... } > > I did this to be defensive against the possibility of new fields in > ProcessingConfig in the future. We want to copy api_format_, only changing the > sampling rate and channel count. 
If we add new fields, those should be copied > as well rather than set to default values (which is what the aggregate > initialization syntax would result in). > > WDYT? Good point! But then shouldn't you do the same in the other cases where you copy it? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/audio_processing_impl.cc:703: if (reverse_config.num_channels() <= 0) { On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > Why is this needed? Is this because you don't check it in the initialization? > > Yes. Do you think it is worth it to add these to be able to not allocate the reverse stream for some test cases (in all real applications, there is a reverse stream)? https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:320: virtual int ProcessStream(const float* const* src, On 2015/07/15 01:12:47, mgraczyk wrote: > On 2015/07/14 23:12:44, aluebs-webrtc wrote: > > The idea is to migrate to this interface and get rid of the other one? Because > > adding support to yet another interface is exactly the opposite of what we > want. > > Yes, I am hoping that is the end result. With a "config" struct and sensible > defaults, it will be easier to change this interface without changing client > code. Agreed. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/15 01:12:47, mgraczyk wrote: > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > Here you are passing in a ReverseStream as well. You could probably just drop > > the ProcessingConfig and pass in 2 StreamConfigs instead. 
> > That's true, but this makes the interface more consistent and easier to change. > > I'll do whatever you want here. I personally prefer to have 2 StreamConfigs to be clear to the user which parameters are actually going to be used and to be consistent with the reverse stream. https://codereview.chromium.org/1226093007/diff/100001/webrtc/modules/audio_p... webrtc/modules/audio_processing/include/audio_processing.h:491: return sample_rate_hz * (AudioProcessing::kChunkSizeMs / 1000.0); On 2015/07/15 01:12:47, mgraczyk wrote: > I guess if all our sampling rates are divisible by 100, it is okay to do the > integer division. When I wrote this I was mistakenly thinking of sample_rate_hz_ > as a float. If our sample rates are not divisible by 100 we have a bigger problem, because a chunk would not contain an integer number of samples.
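As an aside, the exactness argument above is easy to check with plain integer arithmetic. A minimal sketch (`CalcNumFrames` is a hypothetical helper; `kChunkSizeMs` mirrors the APM 10 ms chunk constant):

```cpp
#include <cassert>

// Mirrors AudioProcessing::kChunkSizeMs (10 ms chunks).
constexpr int kChunkSizeMs = 10;

// Frames (samples per channel) in one chunk. Integer division is exact
// whenever the rate is a multiple of 100 (8000, 16000, 32000, 48000, ...).
int CalcNumFrames(int sample_rate_hz) {
  assert(sample_rate_hz % 100 == 0);  // Otherwise a chunk would split a sample.
  return sample_rate_hz * kChunkSizeMs / 1000;
}
```

So for rates not divisible by 100 the assert fires first, which is exactly the "bigger problem" noted above.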
https://codereview.webrtc.org/1226093007/diff/100001/webrtc/common_audio/wav_... File webrtc/common_audio/wav_file.cc (left): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/common_audio/wav_... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, On 2015/07/15 18:04:05, aluebs-webrtc wrote: > On 2015/07/15 01:12:45, mgraczyk wrote: > > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > > Why do you need to remove this? > > > > My comments were left behind on a previous patch. I'll paste them here and in > > the other places: > > > > This check fails when the number of channels does not divide the buffet size. > It > > doesn't seem correct because we may have written a chunk that splits a frame > > without writing all samples. The only thing that changes here is num_samples_ > > and the check happens on Close. If an invalid number of total samples have > been > > written, we will catch it there. > > But do we want to write a frame without all samples? I mean, why should that > happen? The WriteSamples(float*...) passes partial frames because it uses a buffer size that isn't necessarily a multiple of the number of channels. The CHECK is overly restrictive. We want to be able to write partial frames because there could be some buffering between whatever module generates the audio and this WAV writer. The CHECK places restrictions on the buffer's output but does not give us any additional safety on the output of the audio generator because the buffer could pass along invalid partial frames as whole frames anyway. If we want to make sure we're generating whole frames, we should be doing that somewhere else. The WAV writer just needs to check self-consistency, which means it should do the check when it writes to the file. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.cc:38: void DownmixInterleavedToMono(const T* interleaved, On 2015/07/15 18:04:05, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > > Is this specialization used anywhere? > > > > This isn't the specialization, it's a normal template definition. I could > > static_assert false here instead of providing default behavior, but this is > the > > behavior we'd want for floats. What should I do? > > I meant, I don't think we have any interleaved float cases, right? That's true. I removed the definition so now the function gets linked in for int16_t input. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.cc:335: DownmixStereoToMono<int16_t, int32_t>( On 2015/07/15 18:04:05, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > > Shouldn't this support multiple channels? > > > > Done. Does this work? > > You don't need to do this manually, audio_buffer/channel_buffer does it for you. > Just use split_channels_const(kBand0To8kHz) instead of split_bands_const(). Nice, Done. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_buffer.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.h:34: void DownmixStereoToMono(int num_frames, On 2015/07/15 18:04:05, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > > Is it worth it to specialize for the stereo case? > > > > Probably, although I didn't benchmark it. I figured the stereo case would be > > extremely common. 
> > Yes, common, but do you gain anything? It definitively adds an extra if > statement to all cases. Plus more code to maintain. The if statement is basically free because it's at the beginning of the function and will be consistently true or false, so calls to the function will either be icache misses (in which case the branch penalty will not matter) or icache hits (in which case the branch will be correctly predicted and incur no cost). I agree it's probably not worth the extra code though. Removed. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_processing_impl.cc (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_processing_impl.cc:348: if (stream.sample_rate_hz() < 0) { On 2015/07/15 18:04:06, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > <= > > > > The reverse stream can have zero sampling rate and zero channels here. I > check > > that the reverse channel is not zero in AnalyzeReverseStream. > > But here it looks like all streams can (although you check the num_channels_ > below). Yeah num_in_channels is checked to be nonzero below, and the output has to have 1 channel or the same (nonzero) number of channels as the input. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_processing_impl.cc:574: ProcessingConfig processing_config = api_format_; On 2015/07/15 18:04:06, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > To be consistent, how about creating the ProcessingConfig like: > > > const ProcessingConfig processing_config = { ... } > > > > I did this to be defensive against the possibility of new fields in > > ProcessingConfig in the future. 
We want to copy api_format_, only changing > the > > sampling rate and channel count. If we add new fields, those should be copied > > as well rather than set to default values (which is what the aggregate > > initialization syntax would result in). > > > > WDYT? > > Good point! But then shouldn't you do the same in the other cases where you copy > it? True, Done. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_processing_impl.cc:703: if (reverse_config.num_channels() <= 0) { On 2015/07/15 18:04:06, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > Why is this needed? Is this because you don't check it in the > initialization? > > > > Yes. > > Do you think it is worth it to add these to be able to not allocate the reverse > stream for some test cases (in all real applications, there is a reverse > stream)? What about clients that only use APM for its one-way processing? Noise suppression, beamforming, AGC, etc? https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/15 18:04:06, aluebs-webrtc wrote: > On 2015/07/15 01:12:47, mgraczyk wrote: > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > Here you are passing in a ReverseStream as well. You could probably just > drop > > > the ProcessingConfig and pass in 2 StreamConfigs instead. > > > > That's true, but this makes the interface more consistent and easier to > change. > > > > I'll do whatever you want here. > > I personally prefer to have 2 StreamConfigs to be clear to the user which > parameters are actually going to be used and to be consistent with the reverse > stream.
They are all used. If the reverse stream configuration changes here, then APM will be reinitialized with a different reverse stream. The point is that if any part of the config changes, the whole thing gets wiped and reinitialized. That's how it was before, except it was less obvious because the caller didn't pass in information about the reverse stream. It seems like passing the reverse config here makes that more clear, not less. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:491: return sample_rate_hz * (AudioProcessing::kChunkSizeMs / 1000.0); On 2015/07/15 18:04:06, aluebs-webrtc wrote: > On 2015/07/15 01:12:47, mgraczyk wrote: > > I guess if all our sampling rates are divisible by 100, it is okay to do the > > integer division. When I wrote this I was mistakenly thinking of > sample_rate_hz_ > > as a float. > > If our sample rates are not divisible by 100 we have a bigger problem, because a > chunk would not contain an integer number of samples. Yeah, that makes sense. https://codereview.webrtc.org/1226093007/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:305: virtual int ProcessStream(const float* const* src, On 2015/07/15 05:21:20, ajm wrote: > Can you add a TODO here and on the deprecated AnalyzeReverseStream to remove > when clients are updated? Done. https://codereview.webrtc.org/1226093007/diff/120001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/include/audio_processing.h:467: samples_per_channel_(calculate_samples_per_channel(sample_rate_hz)) {} On 2015/07/15 16:42:17, aluebs-webrtc wrote: > On 2015/07/15 05:21:20, ajm wrote: > > Alex, Michael: I know it breaks convention in this module, but what do you > think > > about using num_frames_ to represent this quantity? We've been switching > > elsewhere. > > It is unfortunate that we used "frames" to refer to blocks/chunks in other > places, so I find the term "frames" really confusing. But now that we have a > convention, I think we should stick with it for all new code. In short, I agree > it is probably for the best to use num_frames_. Done, although I think there are plenty of better names. "frames_per_chunk_", "frames_per_channel_", "chunk_size_frames_", or what we have now would all be more clear.
https://codereview.webrtc.org/1226093007/diff/100001/webrtc/common_audio/wav_... File webrtc/common_audio/wav_file.cc (left): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/common_audio/wav_... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, On 2015/07/15 20:03:19, mgraczyk wrote: > On 2015/07/15 18:04:05, aluebs-webrtc wrote: > > On 2015/07/15 01:12:45, mgraczyk wrote: > > > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > > > Why do you need to remove this? > > > > > > My comments were left behind on a previous patch. I'll paste them here and > in > > > the other places: > > > > > > This check fails when the number of channels does not divide the buffet > size. > > It > > > doesn't seem correct because we may have written a chunk that splits a frame > > > without writing all samples. The only thing that changes here is > num_samples_ > > > and the check happens on Close. If an invalid number of total samples have > > been > > > written, we will catch it there. > > > > But do we want to write a frame without all samples? I mean, why should that > > happen? > > The WriteSamples(float*...) passes partial frames because it uses a buffer size > that isn't necessarily a multiple of the number of channels. > > The CHECK is overly restrictive. We want to be able to write partial frames > because there could be some buffering between the whatever module generates the > audio and this WAV writer. The CHECK places restrictions on the buffer's output > but does not give us any additional safety on the output of the audio generator > because the buffer could pass along invalid partial frames as whole frames > anyway. If we want to make sure we're generating whole frames, we should be > doing that somewhere else. The WAV writer just needs to check self-consistency, > which means it should do the check when it writes to the file. Ok, agreed. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/audio_buffer.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.h:78: } else if (num_channels == 2) { On 2015/07/15 01:12:46, mgraczyk wrote: > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > Is it worth it to specialize for the stereo case? > > Same answer, although here the savings is not going to be as much. So, should this be removed as well following the same logic as above? How about the 1 channel case? https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_processing_impl.cc (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_processing_impl.cc:348: if (stream.sample_rate_hz() < 0) { On 2015/07/15 20:03:19, mgraczyk wrote: > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > On 2015/07/15 01:12:46, mgraczyk wrote: > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > <= > > > > > > The reverse stream can have zero sampling rate and zero channels here. I > > check > > > that the reverse channel is not zero in AnalyzeReverseStream. > > > > But here it looks like all streams can (although you check the num_channels_ > > below). > > Yeah num_in_channels is checked to be nonzero below, and the output has to have > 1 channel or the same (nonzero) number of channels as the input. Yes, but what about the sample rate? https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_processing_impl.cc:703: if (reverse_config.num_channels() <= 0) { On 2015/07/15 20:03:19, mgraczyk wrote: > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > On 2015/07/15 01:12:46, mgraczyk wrote: > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > Why is this needed? Is this because you don't check it in the > > initialization? > > > > > > Yes. 
> > > > Do you think it is worth it to add these to be able to not allocate the > reverse > > stream for some test cases (in all real applications, there is a reverse > > stream)? > > What about clients that only use APM for it's one way processing? Noise > suppression, beamforming, AGC, etc? Ok, I was not thinking of any 1 way clients, but you are right. Agreed on having this. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/15 20:03:20, mgraczyk wrote: > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > On 2015/07/15 01:12:47, mgraczyk wrote: > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > Here you are passing in a ReverseStream as well. You could probably just > > drop > > > > the ProcessingConfig and pass in 2 StreamConfigs instead. > > > > > > That's true, but this makes the interface more consistent and easier to > > change. > > > > > > I'll do whatever you want here. > > > > I personally prefer to have 2 StreamConfigs to be clear to the user which > > parameters are actually going to be used and to be consistent with the reverse > > stream. > > They are all used. If the reverse stream configuration changes here, then APM > will be reinitialized with a different reverse stream. > > The point is that if any part of the config changes, the whole thing gets wiped > and reinitialized. That's how it was before, except it was less obvious > because the caller didn't pass in information about the reverse stream. It seems > like passing the reverse config here makes that more clear, not less. But before you could only change the reverse stream when calling the reverse stream and it would be reinitialized then. 
I find that clearer, than having to specify (or use defaults) for a stream I am not calling at the moment. https://codereview.webrtc.org/1226093007/diff/160001/webrtc/common_audio/incl... File webrtc/common_audio/include/audio_util.h (right): https://codereview.webrtc.org/1226093007/diff/160001/webrtc/common_audio/incl... webrtc/common_audio/include/audio_util.h:155: T* deinterleaved); I don't see what this declaration (with no definition) is doing.
https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_buffer.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.h:78: } else if (num_channels == 2) { On 2015/07/15 21:29:16, aluebs-webrtc wrote: > On 2015/07/15 01:12:46, mgraczyk wrote: > > On 2015/07/14 23:12:42, aluebs-webrtc wrote: > > > Is it worth it to specialize for the stereo case? > > > > Same answer, although here the savings is not going to be as much. > > So, should this be removed as well following the same logic as above? How about > the 1 channel case? Done. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_processing_impl.cc (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_processing_impl.cc:348: if (stream.sample_rate_hz() < 0) { On 2015/07/15 21:29:17, aluebs-webrtc wrote: > On 2015/07/15 20:03:19, mgraczyk wrote: > > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > > On 2015/07/15 01:12:46, mgraczyk wrote: > > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > > <= > > > > > > > > The reverse stream can have zero sampling rate and zero channels here. I > > > check > > > > that the reverse channel is not zero in AnalyzeReverseStream. > > > > > > But here it looks like all streams can (although you check the num_channels_ > > > below). > > > > Yeah num_in_channels is checked to be nonzero below, and the output has to > have > > 1 channel or the same (nonzero) number of channels as the input. > > Yes, but what about the sample rate? I changed the check so that sample_rate only matters if num_channels is positive. https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/15 21:29:17, aluebs-webrtc wrote: > On 2015/07/15 20:03:20, mgraczyk wrote: > > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > > On 2015/07/15 01:12:47, mgraczyk wrote: > > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > > Here you are passing in a ReverseStream as well. You could probably just > > > drop > > > > > the ProcessingConfig and pass in 2 StreamConfigs instead. > > > > > > > > That's true, but this makes the interface more consistent and easier to > > > change. > > > > > > > > I'll do whatever you want here. > > > > > > I personally prefer to have 2 StreamConfigs to be clear to the user which > > > parameters are actually going to be used and to be consistent with the > reverse > > > stream. > > > > They are all used. If the reverse stream configuration changes here, then APM > > will be reinitialized with a different reverse stream. > > > > The point is that if any part of the config changes, the whole thing gets > wiped > > and reinitialized. That's how it was before, except it was less obvious > > because the caller didn't pass in information about the reverse stream. It > seems > > like passing the reverse config here makes that more clear, not less. > > But before you could only change the reverse stream when calling the reverse > stream and it would be reinitialized then. I find that clearer, than having to > specify (or use defaults) for a stream I am not calling at the moment. Calls to ProcessStream didn't change the reverse stream config, but they did reset its buffers and clear the current state when the forward stream config changed. Previously, the old reverse config was reused implicitly. 
Isn't it better to require clients to explicitly specify a new config when the reverse stream is reset rather than just using the old one? Clients will not have to specify new config or use defaults for the reverse stream. They will use the same ProcessingConfig object they used to create APM on calls to ProcessStream and AnalyzeReverseStream. I think calling code will look like: apm->Initialize(my_config_); ... apm->ProcessStream(my_input_, my_config_, my_output); ... apm->AnalyzeReverseStream(my_reverse_data_, my_config_->reverse_stream()); Compare that to apm->Initialize(my_config_); ... apm->ProcessStream(my_input_, my_config_->input_stream(), my_config_->output_stream(), my_output); ... apm->AnalyzeReverseStream(my_reverse_data_, my_config_->reverse_stream()); I'll change it if you really want me to. https://codereview.webrtc.org/1226093007/diff/160001/webrtc/common_audio/incl... File webrtc/common_audio/include/audio_util.h (right): https://codereview.webrtc.org/1226093007/diff/160001/webrtc/common_audio/incl... webrtc/common_audio/include/audio_util.h:155: T* deinterleaved); On 2015/07/15 21:29:17, aluebs-webrtc wrote: > I don't see what this declaration (with no definition) is doing. This is the template declaration for DownmixInterleavedToMono. The declaration below declares the int16_t specialization.
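For readers following along, the header pattern being clarified here — a template declared with no generic definition, plus a declared specialization defined in one translation unit — can be sketched as follows. This is a sketch only: `DownmixToMono` is a hypothetical stand-in for the real DownmixInterleavedToMono, and the int32_t accumulator mirrors the <int16_t, int32_t> usage seen earlier in the thread.

```cpp
#include <cassert>
#include <cstdint>

// "Header" side: declaration only. Deliberately no generic definition,
// so an unsupported element type fails at link time rather than silently
// getting default behavior.
template <typename T>
void DownmixToMono(const T* interleaved, int num_frames, int num_channels,
                   T* mono);

// Declared specialization; this is the "declaration below" for int16_t.
template <>
void DownmixToMono<int16_t>(const int16_t* interleaved, int num_frames,
                            int num_channels, int16_t* mono);

// ".cc" side: the int16_t definition. Accumulate in a wider type to avoid
// overflow before averaging.
template <>
void DownmixToMono<int16_t>(const int16_t* interleaved, int num_frames,
                            int num_channels, int16_t* mono) {
  for (int i = 0; i < num_frames; ++i) {
    int32_t acc = 0;
    for (int ch = 0; ch < num_channels; ++ch)
      acc += interleaved[i * num_channels + ch];
    mono[i] = static_cast<int16_t>(acc / num_channels);
  }
}
```

Only the int16_t instantiation gets linked in, which matches the observation above that there is no interleaved float case to support.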
lgtm % a comment https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/15 21:53:56, mgraczyk wrote: > On 2015/07/15 21:29:17, aluebs-webrtc wrote: > > On 2015/07/15 20:03:20, mgraczyk wrote: > > > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > > > On 2015/07/15 01:12:47, mgraczyk wrote: > > > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > > > Here you are passing in a ReverseStream as well. You could probably > just > > > > drop > > > > > > the ProcessingConfig and pass in 2 StreamConfigs instead. > > > > > > > > > > That's true, but this makes the interface more consistent and easier to > > > > change. > > > > > > > > > > I'll do whatever you want here. > > > > > > > > I personally prefer to have 2 StreamConfigs to be clear to the user which > > > > parameters are actually going to be used and to be consistent with the > > reverse > > > > stream. > > > > > > They are all used. If the reverse stream configuration changes here, then > APM > > > will be reinitialized with a different reverse stream. > > > > > > The point is that if any part of the config changes, the whole thing gets > > wiped > > > and reinitialized. That's how it was before, except it was less obvious > > > because the caller didn't pass in information about the reverse stream. It > > seems > > > like passing the reverse config here makes that more clear, not less. > > > > But before you could only change the reverse stream when calling the reverse > > stream and it would be reinitialized then. I find that clearer, than having to > > specify (or use defaults) for a stream I am not calling at the moment. 
> > Calls to ProcessStream didn't change the reverse stream config, but they did > reset its buffers and clear the current state when the forward stream config > changed. Previously, the old reverse config was reused implicitly. Isn't it > better to require clients to explicitly specify a new config when the reverse > stream is reset rather than just using the old one? > > Clients will not have to specify new config or use defaults for the reverse > stream. They will use the same ProcessingConfig object they used to create APM > on calls to ProcessStream and AnalyzeReverseStream. I think calling code will > look like: > > > apm->Initialize(my_config_); > ... > apm->ProcessStream(my_input_, my_config_, my_output); > ... > apm->AnalyzeReverseStream(my_reverse_data_, my_config_->reverse_stream()); > > > Compare that to > apm->Initialize(my_config_); > ... > > apm->ProcessStream(my_input_, my_config_->input_stream(), > my_config_->output_stream(), my_output); > ... > apm->AnalyzeReverseStream(my_reverse_data_, my_config_->reverse_stream()); > > > I'll change it if you really want me to. Personally I find the last one more consistent (ProcessStream and AnalyzeReverseStream are the same) and clear. If you know you are going to reset it, then you can always call Initialize again with the whole config, else you only pass in what is needed. For example, an application that has no reverse stream never has to create a config (not even the default one) for it. https://codereview.webrtc.org/1226093007/diff/160001/webrtc/common_audio/incl... File webrtc/common_audio/include/audio_util.h (right): https://codereview.webrtc.org/1226093007/diff/160001/webrtc/common_audio/incl... webrtc/common_audio/include/audio_util.h:155: T* deinterleaved); On 2015/07/15 21:53:56, mgraczyk wrote: > On 2015/07/15 21:29:17, aluebs-webrtc wrote: > > I don't see what this declaration (with no definition) is doing. > > This is the template declaration for DownmixInterleavedToMono. 
The declaration > below declares the int16_t specialization. Oh, you are right. Thanks for clarifying!
https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:321: const ProcessingConfig& processing_config, On 2015/07/16 00:20:52, aluebs-webrtc wrote: > On 2015/07/15 21:53:56, mgraczyk wrote: > > On 2015/07/15 21:29:17, aluebs-webrtc wrote: > > > On 2015/07/15 20:03:20, mgraczyk wrote: > > > > On 2015/07/15 18:04:06, aluebs-webrtc wrote: > > > > > On 2015/07/15 01:12:47, mgraczyk wrote: > > > > > > On 2015/07/14 23:12:43, aluebs-webrtc wrote: > > > > > > > Here you are passing in a ReverseStream as well. You could probably > > just > > > > > drop > > > > > > > the ProcessingConfig and pass in 2 StreamConfigs instead. > > > > > > > > > > > > That's true, but this makes the interface more consistent and easier > to > > > > > change. > > > > > > > > > > > > I'll do whatever you want here. > > > > > > > > > > I personally prefer to have 2 StreamConfigs to be clear to the user > which > > > > > parameters are actually going to be used and to be consistent with the > > > reverse > > > > > stream. > > > > > > > > They are all used. If the reverse stream configuration changes here, then > > APM > > > > will be reinitialized with a different reverse stream. > > > > > > > > The point is that if any part of the config changes, the whole thing gets > > > wiped > > > > and reinitialized. That's how it was before, except it was less obvious > > > > because the caller didn't pass in information about the reverse stream. It > > > seems > > > > like passing the reverse config here makes that more clear, not less. > > > > > > But before you could only change the reverse stream when calling the reverse > > > stream and it would be reinitialized then. 
I find that clearer, than having > to > > > specify (or use defaults) for a stream I am not calling at the moment. > > > > Calls to ProcessStream didn't change the reverse stream config, but they did > > reset its buffers and clear the current state when the forward stream config > > changed. Previously, the old reverse config was reused implicitly. Isn't it > > better to require clients to explicitly specify a new config when the reverse > > stream is reset rather than just using the old one? > > > > Clients will not have to specify new config or use defaults for the reverse > > stream. They will use the same ProcessingConfig object they used to create APM > > on calls to ProcessStream and AnalyzeReverseStream. I think calling code will > > look like: > > > > > > apm->Initialize(my_config_); > > ... > > apm->ProcessStream(my_input_, my_config_, my_output); > > ... > > apm->AnalyzeReverseStream(my_reverse_data_, my_config_->reverse_stream()); > > > > > > Compare that to > > apm->Initialize(my_config_); > > ... > > > > apm->ProcessStream(my_input_, my_config_->input_stream(), > > my_config_->output_stream(), my_output); > > ... > > apm->AnalyzeReverseStream(my_reverse_data_, my_config_->reverse_stream()); > > > > > > I'll change it if you really want me to. > > Personally I find the last one more consistent (ProcessStream and > AnalyzeReverseStream are the same) and clear. If you know you are going to reset > it, then you can always call Initialize again with the whole config, else you > only pass in what is needed. For example, an application that has no reverse > stream never has to create a config (not even the default one) for it. I'll change it, but just so we're clear: ProcessStream does reset the reverse stream, so the client would not need to call Initialize again with the whole config. Not only that, but if we're going to only pass "what is needed" we shouldn't be passing any config here.
The client should call Initialize() to reinitialize with a different config.
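The call shape settled on above (explicit input/output StreamConfigs on ProcessStream, mirroring AnalyzeReverseStream) can be sketched with toy types. Everything here is a hypothetical stand-in; the real declarations live in webrtc/modules/audio_processing/include/audio_processing.h and differ in detail.

```cpp
#include <cassert>

// Hypothetical stand-ins for webrtc::StreamConfig / ProcessingConfig and the
// APM interface, only to illustrate the agreed call shape.
struct StreamConfig {
  int sample_rate_hz = 0;
  int num_channels = 0;
  bool has_keyboard = false;
};

struct ProcessingConfig {
  StreamConfig input_stream;
  StreamConfig output_stream;
  StreamConfig reverse_stream;
};

class FakeApm {
 public:
  // Initialize() takes the whole config; a config change on a later
  // ProcessStream() call also triggers reinitialization.
  void Initialize(const ProcessingConfig& config) { config_ = config; }

  // The settled interface: input and output StreamConfigs are passed
  // explicitly, mirroring AnalyzeReverseStream().
  int ProcessStream(const float* const* src,
                    const StreamConfig& input,
                    const StreamConfig& output,
                    float* const* dest) {
    (void)src; (void)dest;
    config_.input_stream = input;
    config_.output_stream = output;
    return 0;  // stands in for kNoError
  }

  int AnalyzeReverseStream(const float* const* data,
                           const StreamConfig& reverse) {
    (void)data;
    config_.reverse_stream = reverse;
    return 0;
  }

 private:
  ProcessingConfig config_;
};
```

An application with no reverse stream then never constructs a reverse config at all, which was the main argument for this shape.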
Changed ProcessStream API
lgtm
The CQ bit was checked by mgraczyk@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1226093007/200001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: presubmit on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/presubmit/builds/191)
mgraczyk@chromium.org changed reviewers: + pbos@chromium.org
pbos, I need owners approval for talk/media/webrtc/fakewebrtcvoiceengine.h. PTAL when you have a chance.
pbos@webrtc.org changed reviewers: + pbos@webrtc.org
talk/ rs-lgtm
+ekmeyerson FYI.
Mostly minor things. Main question is around the accounting of the keyboard channel. https://codereview.webrtc.org/1226093007/diff/80001/webrtc/common_audio/wav_f... File webrtc/common_audio/wav_file.cc (left): https://codereview.webrtc.org/1226093007/diff/80001/webrtc/common_audio/wav_f... webrtc/common_audio/wav_file.cc:126: CHECK(CheckWavParameters(num_channels_, On 2015/07/10 00:33:36, mgraczyk wrote: > This check fails when the number of channels does not divide the buffer size. It > doesn't seem correct because we may have written a chunk that splits a frame > without writing all samples. The only thing that changes here is num_samples_ > and the check happens on Close. If an invalid number of total samples have been > written, we will catch it there. Agreed, this check seems misplaced. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/audi... File webrtc/common_audio/audio_util_unittest.cc (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/audi... webrtc/common_audio/audio_util_unittest.cc:123: const int kNumMultichannelFrames = 4; Same here (and possibly elsewhere?) kNumFrames or kSamplesPerChannel. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/audi... webrtc/common_audio/audio_util_unittest.cc:212: downmixed); Indentation doesn't look right. Run clang-format. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/incl... File webrtc/common_audio/include/audio_util.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/incl... webrtc/common_audio/include/audio_util.h:119: int num_multichannel_frames, Why "multichannel"? To differentiate from the "frame == block of audio" interpretation? That could still be misconstrued. I think we should go with num_frames or stick to the old samples_per_channel. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/incl... 
webrtc/common_audio/include/audio_util.h:122: assert(num_channels > 0); For new code we should use DCHECK. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.cc:32: switch (stream_config.num_channels()) { I see that this is identical to the previous behavior, but why not make it generic? return stream_config.num_channels(); Also, does the num_channels count really not include the keyboard channel? That seems counter-intuitive to me, but in any case should be documented in StreamConfig. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:470: int num_channels = 0, Document whether this includes the keyboard channel or not. (As mentioned elsewhere, I think it would be more intuitive to include it.) https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:471: bool has_keyboard = false) Document that the keyboard channel is last, if present. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:498: static int calculate_samples_per_channel(int sample_rate_hz) { calculate_frames
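For context, the downmix routine under discussion is channel-count-agnostic, which is what enables arbitrary array sizes. A rough sketch in the spirit of DownmixInterleavedToMono in audio_util.h (the real implementation is a template and differs in detail), using the num_frames naming proposed above, where num_frames is samples per channel:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of an interleaved-to-mono downmix that works for any number of
// input channels. Hypothetical name; the real audio_util.h code differs.
void DownmixInterleavedToMonoSketch(const int16_t* interleaved,
                                    int16_t* mono,
                                    int num_frames,
                                    int num_channels) {
  assert(num_channels > 0);  // The real code uses DCHECK per the comment above.
  for (int i = 0; i < num_frames; ++i) {
    int32_t acc = 0;  // Wide accumulator so the sum cannot overflow int16_t.
    for (int ch = 0; ch < num_channels; ++ch)
      acc += interleaved[i * num_channels + ch];
    mono[i] = static_cast<int16_t>(acc / num_channels);
  }
}
```

Each output sample is the mean of one frame's channel samples, so a 4-channel frame of {100, 200, 300, 400} downmixes to 250.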
https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/audi... File webrtc/common_audio/audio_util_unittest.cc (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/audi... webrtc/common_audio/audio_util_unittest.cc:123: const int kNumMultichannelFrames = 4; On 2015/07/22 22:47:21, andrew wrote: > Same here (and possibly elsewhere?) kNumFrames or kSamplesPerChannel. Done. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/audi... webrtc/common_audio/audio_util_unittest.cc:212: downmixed); On 2015/07/22 22:47:21, andrew wrote: > Indentation doesn't look right. Run clang-format. Done. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/incl... File webrtc/common_audio/include/audio_util.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/incl... webrtc/common_audio/include/audio_util.h:119: int num_multichannel_frames, On 2015/07/22 22:47:21, andrew wrote: > Why "multichannel"? To differentiate from the "frame == block of audio" > interpretation? That could still be misconstrued. > > I think we should go use num_frames or stick to the old samples_per_channel. Done. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/common_audio/incl... webrtc/common_audio/include/audio_util.h:122: assert(num_channels > 0); On 2015/07/22 22:47:21, andrew wrote: > For new code we should use DCHECK. Done. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.cc:32: switch (stream_config.num_channels()) { On 2015/07/22 22:47:21, andrew wrote: > I see that this is identical to the previous behavior, but why not make it > generic? > > return stream_config.num_channels(); > > Also, does the num_channels count really not include the keyboard channel? 
That > seems counter-intuitive to me, but in any case should be documented in > StreamConfig. Changed to be generic. I agree it makes sense for "num_channels" to match the number of channels passed by the caller. However, I think it is simpler if num_channels does not include the keyboard channel for a few reasons. First, every place we currently use num_channels() assumes that the channel count does not include the keyboard. I think that indicates that the more natural usage is to have audio channel count and keyboard be distinct. Second, clients will have to specify "has_keyboard" anyway when there is a keyboard channel. It seems strange for them to need to indicate information about the keyboard channel in two places. Third, am I right in assuming that the output and reverse streams will never have a keyboard? If only the input stream has one, maybe it's best to make that the special case. Lastly, what should AudioProcessing::num_input_channels, AudioProcessing::num_output_channels, etc. return? The number of channels including the keyboard or excluding? I believe that before this CL they returned the number of channels excluding the keyboard, so it seems inconsistent to ask clients to pass in one number yet return another from a method with the same name. What do you think? https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:470: int num_channels = 0, On 2015/07/22 22:47:21, andrew wrote: > Document whether this includes the keyboard channel or not. (As mentioned > elsewhere, I think it would be more intuitive to include it.) Done. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/include/audio_processing.h:471: bool has_keyboard = false) On 2015/07/22 22:47:21, andrew wrote: > Document that the keyboard channel is last, if present. Done. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:498: static int calculate_samples_per_channel(int sample_rate_hz) { On 2015/07/22 22:47:21, andrew wrote: > calculate_frames Done.
lgtm with the requested documentation added. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/audio_buffer.cc (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/audio_buffer.cc:32: switch (stream_config.num_channels()) { On 2015/07/23 00:16:54, mgraczyk wrote: > On 2015/07/22 22:47:21, andrew wrote: > > I see that this is identical to the previous behavior, but why not make it > > generic? > > > > return stream_config.num_channels(); > > > > Also, does the num_channels count really not include the keyboard channel? > That > > seems counter-intuitive to me, but in any case should be documented in > > StreamConfig. > > Changed to be generic. > > I agree it makes sense for "num_channels" to match the number of channels passed > by the caller. However there I it is simpler if num_channels does not include > the keyboard channel for a few reasons. > > First, everywhere we currently use num_channels() assumes that the channel count > does not include the keyboard. I think that indicates that the more natural > usage is to have audio channel count and keyboard be distinct. > > Second, clients will have to specify "has_keyboard" anyway when there is a > keyboard channel. It seems strange for them to need to indicate information > about the keyboard channel in two places. > > Third, am I right in assuming that the output and reverse streams will never > have a keyboard? If only the input stream has one, maybe it's best to make that > the special case. That's right. > > Lastly, what should AudioProcessing::num_input_channels, > AudioProcessing::num_output_channels, etc return? The number of channels > including the keyboard or excluding? 
I believe that before this CL they > returned the number of channels excluding the keyboard, so it seems inconsistent > to ask clients to pass in one number yet return another from a method with the > same name. > > What do you think? I agree with you. The last two arguments are particularly persuasive. https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:470: int num_channels = 0, On 2015/07/23 00:16:54, mgraczyk wrote: > On 2015/07/22 22:47:21, andrew wrote: > > Document whether this includes the keyboard channel or not. (As mentioned > > elsewhere, I think it would be more intuitive to include it.) > > Done. You missed these two documentation requests.
https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:470: int num_channels = 0, On 2015/07/23 00:49:07, andrew wrote: > On 2015/07/23 00:16:54, mgraczyk wrote: > > On 2015/07/22 22:47:21, andrew wrote: > > > Document whether this includes the keyboard channel or not. (As mentioned > > > elsewhere, I think it would be more intuitive to include it.) > > > > Done. > > You missed these two documentation requests. Sorry, shouldn't have said "Done". I have them locally but didn't want to upload until we agreed on what "num_channels" should mean. Now it's actually Done.
lgtm https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/include/audio_processing.h (right): https://codereview.webrtc.org/1226093007/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/include/audio_processing.h:470: int num_channels = 0, On 2015/07/23 01:05:49, mgraczyk wrote: > On 2015/07/23 00:49:07, andrew wrote: > > On 2015/07/23 00:16:54, mgraczyk wrote: > > > On 2015/07/22 22:47:21, andrew wrote: > > > > Document whether this includes the keyboard channel or not. (As mentioned > > > > elsewhere, I think it would be more intuitive to include it.) > > > > > > Done. > > > > You missed these two documentation requests. > > Sorry, shouldn't have said "Done". I have them locally but didn't want to upload > until we agreed on what "num_channels" should mean. Ah, right. Looks great.
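The two StreamConfig conventions settled in this thread can be summarized as helpers. Names here are illustrative, not the real audio_processing.h declarations: num_channels excludes the keyboard channel (which, if present, is last in the caller's buffer), and, assuming APM's usual 10 ms processing chunks, frames per chunk derive from the sample rate.

```cpp
#include <cassert>

// Channels the caller's buffer actually holds: audio channels plus the
// trailing keyboard channel, if any. (Hypothetical helper name.)
int TotalChannels(int num_channels, bool has_keyboard) {
  return num_channels + (has_keyboard ? 1 : 0);
}

// The method renamed from calculate_samples_per_channel() above;
// presumably sample_rate_hz / 100 for 10 ms chunks.
int CalculateFrames(int sample_rate_hz) {
  return sample_rate_hz / 100;
}
```

So a Mac-style stereo input with a keyboard channel carries 3 channels in the buffer but reports num_channels == 2, and a 48 kHz stream is processed 480 frames at a time.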
The CQ bit was checked by mgraczyk@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from pbos@webrtc.org, aluebs@webrtc.org Link to the patchset: https://codereview.webrtc.org/1226093007/#ps240001 (title: "Fix docs")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1226093007/240001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1226093007/240001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: android on tryserver.webrtc (JOB_TIMED_OUT, no build URL) android_rel on tryserver.webrtc (JOB_TIMED_OUT, no build URL)
Message was sent while issue was closed.
Committed patchset #13 (id:240001) manually as c204754b7a0cc801c70e8ce6c689f57f6ce00b3b (presubmit successful).
Message was sent while issue was closed.
A revert of this CL (patchset #13 id:240001) has been created in https://codereview.webrtc.org/1253573005/ by magjed@webrtc.org. The reason for reverting is: Breaks Chromium FYI content_browsertest on all platforms. The testcase that fails is WebRtcAecDumpBrowserTest.CallWithAecDump. https://build.chromium.org/p/chromium.webrtc.fyi/builders/Linux/builds/19388 Sample output: [ RUN ] WebRtcAecDumpBrowserTest.CallWithAecDump Xlib: extension "RANDR" missing on display ":9". [4:14:0722/211548:1282124453:WARNING:webrtcvoiceengine.cc(472)] Unexpected codec: ISAC/48000/1 (105) [4:14:0722/211548:1282124593:WARNING:webrtcvoiceengine.cc(472)] Unexpected codec: PCMU/8000/2 (110) [4:14:0722/211548:1282124700:WARNING:webrtcvoiceengine.cc(472)] Unexpected codec: PCMA/8000/2 (118) [4:14:0722/211548:1282124815:WARNING:webrtcvoiceengine.cc(472)] Unexpected codec: G722/8000/2 (119) [19745:19745:0722/211548:1282133667:INFO:CONSOLE(64)] "Looking at video in element remote-view-1", source: http://127.0.0.1:48819/media/webrtc_test_utilities.js (64) [19745:19745:0722/211548:1282136892:INFO:CONSOLE(64)] "Looking at video in element remote-view-2", source: http://127.0.0.1:48819/media/webrtc_test_utilities.js (64) ../../content/test/webrtc_content_browsertest_base.cc:62: Failure Value of: ExecuteScriptAndExtractString( shell()->web_contents(), javascript, &result) Actual: false Expected: true Failed to execute javascript call({video: true, audio: true});. 
From javascript: (nothing) When executing 'call({video: true, audio: true});' ../../content/test/webrtc_content_browsertest_base.cc:75: Failure Failed ../../content/browser/media/webrtc_aecdump_browsertest.cc:26: Failure Expected: (base::kNullProcessId) != (*id), actual: 0 vs 0 ../../content/browser/media/webrtc_aecdump_browsertest.cc:95: Failure Value of: GetRenderProcessHostId(&render_process_id) Actual: false Expected: true ../../content/browser/media/webrtc_aecdump_browsertest.cc:99: Failure Value of: base::PathExists(dump_file) Actual: false Expected: true ../../content/browser/media/webrtc_aecdump_browsertest.cc:101: Failure Value of: base::GetFileSize(dump_file, &file_size) Actual: false Expected: true ../../content/browser/media/webrtc_aecdump_browsertest.cc:102: Failure Expected: (file_size) > (0), actual: 0 vs 0 [ FAILED ] WebRtcAecDumpBrowserTest.CallWithAecDump, where TypeParam = and GetParam() = (361 ms) . |