Created: 5 years, 2 months ago by Andrew MacDonald
Modified: 5 years, 1 month ago
CC: webrtc-reviews_webrtc.org, peah-webrtc, tterriberry_mozilla.com, hlundin-webrtc, kwiberg-webrtc, the sun, aluebs-webrtc, bjornv1
Base URL: https://chromium.googlesource.com/external/webrtc.git@master
Target Ref: refs/pending/heads/master
Project: webrtc
Visibility: Public

Description
Add aecdump support to audioproc_f.
Add a new interface to abstract away file operations. This CL temporarily
removes support for dumping the output of reverse streams. It will be easy to
restore in the new framework, although we may decide to only allow it with
the aecdump format.
We also now require the user to specify the output format, rather than
defaulting to the input format.
TEST=Bit-exact output to the previous audioproc_f version using an input wav
file, and to the legacy audioproc using an aecdump file.
Committed: https://crrev.com/bdafe31b86e9819b0adb9041f87e6194b7422b08
Cr-Commit-Position: refs/heads/master@{#10460}
Patch Set 1 #
Total comments: 92
Patch Set 2 : Add TickIntervalStats and clarify some documentation. #
Total comments: 2
Patch Set 3 : Back to ProcessChunk. #
Patch Set 4 : Add ChannelBufferWavReader. #
Patch Set 5 : Rebase. #
Patch Set 6 : Fix compile error. #
Patch Set 7 : Minor fixes. #
Patch Set 8 : Rebase. #
Messages
Total messages: 39 (11 generated)
Description was changed from "wip Add aecdump support to audioproc_f." to the full description above.
Patchset #1 (id:1) has been deleted
Patchset #1 (id:20001) has been deleted
Patchset #1 (id:40001) has been deleted
andrew@webrtc.org changed reviewers: + aluebs@webrtc.org, peah@webrtc.org
Really nice that this work is started!!! Great!

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:37: CheckedDivExact(file.sample_rate(), AudioFileProcessor::kChunksPerSecond),
It would be good to be able to process wav files of any length, most likely by padding to an even number of chunks before processing and then shortening the output to the length of the original input. If a length of an even number of chunks is required, preprocessing is needed before the program can be applied to any type of wav file.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:39: // Process one AuioProcessing::kChunkSizeMs of data from the input file.
Typo: Should be AudioProcessing

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
I think another name is needed for this method. What it really does is produce one frame of processed microphone data, right? Or should it rather be to receive one frame of microphone data? (Currently these are the same, but that need not always hold, so the distinction matters for the naming.) We should at least name it so that it is distinguished from processing a chunk of reverse data. Furthermore, the naming should indicate that it contains all processing of supporting APM API calls, such as set_stream_delay, that are in the aecdump recording.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (left):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:29: DEFINE_string(o, "out.wav", "Name of the output file to write to.");
This description should be made more precise. For instance: "Name of the output file to write the processed capture stream to." The same goes for the description of the "i" parameter.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:73: if (FLAGS_dump.empty() && (FLAGS_aec || FLAGS_ie)) {
Most likely it would make sense here to try opening the file as well, to ensure that the file name is valid.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:100: ap->gain_control()->set_mode(GainControl::kFixedDigital));
Here I think we also need to set the analog gain control mode, as we otherwise cannot properly recreate the behavior for aecdump recordings.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:119: ap.Pass(), fopen(FLAGS_dump.c_str(), "rb"), out_file.Pass()));
I think this fopen should be done outside the construction, as the current way of doing it will give rather uninformative error messages if the file cannot be opened. Also, I think it would make sense to already add proper handling of that case now.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
I'd actually rather put this while loop in the processor. The main reason is that, when using wav files as input, it would significantly simplify simulating scenarios with clock drift and scenarios where the render and capture calls are unbalanced over time, e.g., rrccrcrrrccr instead of rcrcrcrcrc.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:129: int64_t execution_time_ms = processor->processing_time_ms();
It would be nice to also have the max and min computing times reported here, as peakiness in the complexity can be a big problem, and that could perhaps be visible here.
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:37: CheckedDivExact(file.sample_rate(), AudioFileProcessor::kChunksPerSecond),
On 2015/10/20 21:17:25, peah-webrtc wrote:
> It would be good to be able to process wav files of any length, most likely by
> padding to an even number of chunks before processing and then shortening the
> output to the length of the original input.

This line is just about computing the buffer size, which the AudioProcessing interface constrains to 10 ms anyway. aecdumps may only contain full buffers, so we don't have to worry there. For WAV input, processing continues until we don't have a full buffer. So it will process non-10-ms-multiple files, but drop the last < 10 ms chunk, which I think is reasonable behavior.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:39: // Process one AuioProcessing::kChunkSizeMs of data from the input file.
On 2015/10/20 21:17:25, peah-webrtc wrote:
> Typo: Should be AudioProcessing

Done.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/20 21:17:25, peah-webrtc wrote:
> I think another name is needed for this method. What it really does is produce
> one frame of processed microphone data, right? Or should it rather be to
> receive one frame of microphone data? We should at least name it so that it is
> distinguished from processing a chunk of reverse data.

The name is intentionally ambiguous in order for implementations to have different interpretations. WavFileProcessor reads one chunk from the input WAV file, processes it and writes it to the output WAV file. AecDumpFileProcessor reads events until it processes its first stream event and writes that to the output WAV file. We might have a processor in the future that only writes reverse chunks, or (more likely) one that writes both reverse and forward chunks.

I changed it to ProcessAndWriteChunk to be a bit more self-documenting, and I've updated the implementation documentation with the above to make it clear.

> Furthermore, the naming should indicate that it contains all processing of
> supporting APM API calls, such as set_stream_delay, that are in the aecdump
> recording.

As above, that won't happen for all implementations, but I added a note to AecDumpFileProcessor.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (left):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:29: DEFINE_string(o, "out.wav", "Name of the output file to write to.");
On 2015/10/20 21:17:25, peah-webrtc wrote:
> This description should be made more precise. For instance: "Name of the output
> file to write the processed capture stream to." The same goes for the
> description of the "i" parameter.

Done.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:73: if (FLAGS_dump.empty() && (FLAGS_aec || FLAGS_ie)) {
On 2015/10/20 21:17:25, peah-webrtc wrote:
> Most likely it would make sense here to try opening the file as well, to ensure
> that the file name is valid.

See comments below.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:100: ap->gain_control()->set_mode(GainControl::kFixedDigital));
On 2015/10/20 21:17:25, peah-webrtc wrote:
> Here I think we also need to set the analog gain control mode, as we otherwise
> cannot properly recreate the behavior for aecdump recordings.

Thanks for noticing this. It actually has no effect, because we use non-legacy aka "experimental" AGC by default, which disregards these modes. I removed this.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:119: ap.Pass(), fopen(FLAGS_dump.c_str(), "rb"), out_file.Pass()));
On 2015/10/20 21:17:25, peah-webrtc wrote:
> I think this fopen should be done outside the construction, as the current way
> of doing it will give rather uninformative error messages if the file cannot be
> opened. Also, I think it would make sense to already add proper handling of
> that case now.

There are a few advantages to doing it this way:
1. AecDumpFileProcessor takes ownership of the FILE*; by using a temporary we don't have to think about that.
2. It limits the scope of the FILE* in audioproc_float.cc.

The error message isn't all that bad (after I added a string):

#
# Fatal error in ../../webrtc/modules/audio_processing/test/audio_file_processor.cc, line 85
# Check failed: dump_file_
# Could not open dump file for reading.
#

Do you think that's bad enough to warrant explicitly checking for it at the top? I would open and close the file in that case.

Similarly, a missing input WAV file results in:

#
# Fatal error in ../../webrtc/common_audio/wav_file.cc, line 42
# Check failed: file_handle_
# Could not open wav file for reading.
#

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
On 2015/10/20 21:17:25, peah-webrtc wrote:
> I'd actually rather put this while loop in the processor. The main reason is
> that, when using wav files as input, it would significantly simplify simulating
> scenarios with clock drift and scenarios where the render and capture calls are
> unbalanced over time, e.g., rrccrcrrrccr instead of rcrcrcrcrc.

I wasn't planning to even permit running such scenarios here, but I'd like to get your opinion on that.

The initial intent of audioproc_f was to trim down the feature set of audioproc to prevent it from becoming such an unwieldy beast. Arden was asking about how to run such simulation cases recently too. I proposed writing a small tool to convert simulation files into aecdumps. That way we can keep the complexity of the simulation case confined to one place. What do you think?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:129: int64_t execution_time_ms = processor->processing_time_ms();
On 2015/10/20 21:17:25, peah-webrtc wrote:
> It would be nice to also have the max and min computing times reported here, as
> peakiness in the complexity can be a big problem, and that could perhaps be
> visible here.

Good point, done.

I typically don't think the max is very useful, because it can very easily be due to a context switch rather than any peakiness in the complexity. A histogram would probably be most useful. I'm not going to add it now, but the TickIntervalStats setup would make it fairly easy to do in the future.

https://codereview.webrtc.org/1409943002/diff/80001/webrtc/system_wrappers/in...
File webrtc/system_wrappers/interface/tick_util.h (right):

https://codereview.webrtc.org/1409943002/diff/80001/webrtc/system_wrappers/in...
webrtc/system_wrappers/interface/tick_util.h:86: explicit TickInterval(int64_t interval);
Using this constructor is the most natural way for me to specify a min upper bound, and I don't see a reason for it not to be public.
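As a concrete illustration of the min/max reporting and the TickIntervalStats mentioned above: the sketch below shows one way a stats aggregate and the ScopedTimer could cooperate. Only ScopedTimer, TickTime, TickInterval, and the public TickInterval(int64_t) constructor come from this thread; the struct layout, field names, and the use of TickInterval's comparison operators are assumptions for illustration, not the CL's exact code.

  #include <algorithm>
  #include <limits>

  #include "webrtc/system_wrappers/interface/tick_util.h"

  // Hypothetical aggregate; the CL's real TickIntervalStats may differ.
  struct TickIntervalStats {
    // Seed |min| with a large upper bound via the public
    // TickInterval(int64_t) constructor discussed above.
    TickIntervalStats() : min(std::numeric_limits<int64_t>::max()) {}
    webrtc::TickInterval sum;  // Total processing time.
    webrtc::TickInterval max;  // Slowest chunk seen so far.
    webrtc::TickInterval min;  // Fastest chunk seen so far.
  };

  class ScopedTimer {
   public:
    explicit ScopedTimer(TickIntervalStats* stats)
        : stats_(stats), start_time_(webrtc::TickTime::Now()) {}

    // On destruction, fold this measurement into the running stats; the
    // timer cannot be forgotten because leaving the scope stops it.
    ~ScopedTimer() {
      const webrtc::TickInterval interval =
          webrtc::TickTime::Now() - start_time_;
      stats_->sum += interval;
      stats_->max = std::max(stats_->max, interval);
      stats_->min = std::min(stats_->min, interval);
    }

   private:
    TickIntervalStats* const stats_;
    const webrtc::TickTime start_time_;
  };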
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:37: CheckedDivExact(file.sample_rate(), AudioFileProcessor::kChunksPerSecond),
Ah, sorry about that. Now I see! Great! The behavior you described for non-10-ms-multiple files is super!

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/21 00:29:28, Andrew MacDonald wrote:
> The name is intentionally ambiguous in order for implementations to have
> different interpretations. WavFileProcessor reads one chunk from the input WAV
> file, processes it and writes it to the output WAV file. AecDumpFileProcessor
> reads events until it processes its first stream event and writes that to the
> output WAV file.
>
> I changed it to ProcessAndWriteChunk to be a bit more self-documenting.

I think ProcessChunk was better than ProcessAndWriteChunk because of the vagueness. You have a good point that it makes sense for it to be ambiguous, but then I'd actually just call it Process, because that is what it really does. ProcessChunk typically involves processing several chunks (render + capture), and a "chunk" is specified to be 10 ms of data. I'm for sure OK with ProcessChunk, though I think it is a bit misleading.

I still think ProcessAndWriteChunk is not really correct. I'd rather call it ProcessCaptureFrame, or, given that its purpose is intentionally ambiguous, maybe just Process.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:100: ap->gain_control()->set_mode(GainControl::kFixedDigital));
On 2015/10/21 00:29:28, Andrew MacDonald wrote:
> Thanks for noticing this. It actually has no effect, because we use non-legacy
> aka "experimental" AGC by default, which disregards these modes. I removed
> this.

Great! Did not know that.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:119: ap.Pass(), fopen(FLAGS_dump.c_str(), "rb"), out_file.Pass()));
On 2015/10/21 00:29:28, Andrew MacDonald wrote:
> There are a few advantages to doing it this way:
> 1. AecDumpFileProcessor takes ownership of the FILE*; by using a temporary we
> don't have to think about that.
> 2. It limits the scope of the FILE* in audioproc_float.cc.

Those are very valid points! And the error message is quite OK! Good.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
On 2015/10/21 00:29:28, Andrew MacDonald wrote:
> I wasn't planning to even permit running such scenarios here, but I'd like to
> get your opinion on that.
>
> The initial intent of audioproc_f was to trim down the feature set of audioproc
> to prevent it from becoming such an unwieldy beast. Arden was asking about how
> to run such simulation cases recently too. I proposed writing a small tool to
> convert simulation files into aecdumps. That way we can keep the complexity of
> the simulation case confined to one place. What do you think?

Hmm, maybe that is a good way to do it. It makes sense, but my main concern is that two sets of command-line-parsing boilerplate would then need to be maintained.

My suggestion would be to have one common file that contains two layers: one that parses the command line and does any sanity checks/timer initialization/etc., and that then selectively calls two methods in the second layer, one for the aecdump case and one for wav-file-based call synthesis.

In my mind the unwieldiness comes from
- having to support the command-line arguments and the different settings, but we need that anyway for both;
- needing to synthesize the call chain, but we can separate that into a different file.

The main drawback of having two tools for this is that they will diverge, which will make them a pain to maintain. What do you think?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:129: int64_t execution_time_ms = processor->processing_time_ms();
On 2015/10/21 00:29:28, Andrew MacDonald wrote:
> Good point, done.
>
> I typically don't think the max is very useful, because it can very easily be
> due to a context switch rather than any peakiness in the complexity.
>
> A histogram would probably be most useful. I'm not going to add it now, but the
> TickIntervalStats setup would make it fairly easy to do in the future.

That makes sense. If we can get a histogram, that is for sure the best.
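If a histogram does get added later, it could be as simple as the following hypothetical accumulator (not part of this CL), which buckets per-chunk processing times in milliseconds:

  #include <algorithm>
  #include <array>
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Hypothetical sketch, not part of this CL: a coarse histogram of
  // per-chunk processing times.
  class ProcessingTimeHistogram {
   public:
    // One bucket per millisecond; the last bucket absorbs slower chunks.
    void Add(int64_t chunk_ms) {
      ++buckets_[std::min(static_cast<size_t>(chunk_ms), kNumBuckets - 1)];
    }

    void Print() const {
      for (size_t i = 0; i < buckets_.size(); ++i)
        printf("%2zu ms: %d chunks\n", i, buckets_[i]);
    }

   private:
    static const size_t kNumBuckets = 10;
    std::array<int, kNumBuckets> buckets_ = {};
  };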
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/21 08:10:04, peah-webrtc wrote:
> I still think ProcessAndWriteChunk is not really correct. I'd rather call it
> ProcessCaptureFrame, or, given that its purpose is intentionally ambiguous,
> maybe just Process.

Except that it might not process a capture chunk at all, or it might process both render and capture chunks. Process feels a little too ambiguous, because it gives no impression of how much is being processed. How about ProcessSome or ProcessSomeData?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
On 2015/10/21 08:10:04, peah-webrtc wrote:
> My suggestion would be to have one common file that contains two layers: one
> that parses the command line and does any sanity checks/timer
> initialization/etc., and that then selectively calls two methods in the second
> layer, one for the aecdump case and one for wav-file-based call synthesis.

This is almost exactly what I have in this CL, except that I'm calling a single chunk method in a loop rather than processing the whole file in one shot. I did it this way because I expected there to be more shared functionality happening in the loop here. As it turns out, we only need to count the chunks and update the trace time override.

I'd prefer to leave it this way for now. It's easy to refactor into a one-shot method if it becomes clearly superior in the future.

> The main drawback of having two tools for this is that they will diverge, which
> will make them a pain to maintain. What do you think?

My proposal for a simulation -> aecdump converter tool wouldn't repeat any of the command-line flags from here. Essentially, you would provide it capture and render files and some parameters about delay and the distribution of calls to render and capture, and it would spit out an aecdump file.

I think we should add support in audioproc_f for passing in capture and render files and enabling AEC and intelligibility enhancement (IE) processing of them. But it shouldn't accept delay, clock drift, order of calls, etc. If you want that fanciness, you should have to provide an aecdump.
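For reference, the loop-plus-counter arrangement described above reduces to something like this sketch. The RunProcessor wrapper is invented here for illustration, and SetTimeSeconds is assumed from webrtc/test's TraceToStderr; the CL's actual main() may differ.

  #include "webrtc/modules/audio_processing/test/audio_file_processor.h"
  #include "webrtc/test/testsupport/trace_to_stderr.h"

  // Sketch of the shared driver loop: everything format-specific lives in
  // the AudioFileProcessor implementation, while the caller only counts
  // chunks and keeps the trace clock on simulated stream time.
  void RunProcessor(AudioFileProcessor* processor,
                    webrtc::test::TraceToStderr* trace_to_stderr) {
    int num_chunks = 0;
    while (processor->ProcessChunk()) {
      ++num_chunks;
      trace_to_stderr->SetTimeSeconds(
          num_chunks * 1.f / AudioFileProcessor::kChunksPerSecond);
    }
  }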
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/22 00:12:09, Andrew MacDonald wrote:
> Except that it might not process a capture chunk at all, or it might process
> both render and capture chunks. Process feels a little too ambiguous, because
> it gives no impression of how much is being processed. How about ProcessSome
> or ProcessSomeData?

Then the loop does not make sense at all. As it is now, the only reason to have the loop is to count the number of chunks, and with the other names we would need to drop the chunk counter; then I cannot see any reason at all why the loop should be present in this file.

I definitely see your points about ambiguity, but I think one thing we can rely on is that the processing will be frame-based in some manner, and in that case the loop makes sense if it is over frames. Or did you intend the loop to possibly be over protobuf messages (i.e., one loop iteration per message)?

Another suggestion is ProcessFrame (but that is very similar to ProcessChunk). How about that?

Btw, regarding "chunk": it is a replacement for "frame", right? Or is it a wider term, in the sense that it can have any size? I think it is 10 ms in all places I've seen it.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
Answers inline below:

On 2015/10/22 00:12:09, Andrew MacDonald wrote:
> I'd prefer to leave it this way for now. It's easy to refactor into a one-shot
> method if it becomes clearly superior in the future.

You are correct that it can be changed later if needed, and I don't have a strong opinion about it, apart from thinking it would be better not to separate out the for-loop. But let's keep that.

> My proposal for a simulation -> aecdump converter tool wouldn't repeat any of
> the command-line flags from here. Essentially, you would provide it capture and
> render files and some parameters about delay and the distribution of calls to
> render and capture, and it would spit out an aecdump file.
>
> I think we should add support in audioproc_f for passing in capture and render
> files and enabling AEC and intelligibility enhancement (IE) processing of them.
> But it shouldn't accept delay, clock drift, order of calls, etc. If you want
> that fanciness, you should have to provide an aecdump.

I think your idea about the simulation-generating tool is great. That approach makes sense for sure, and if I understand it correctly there will be no overlap at all in command-line options. Sounds super! Do I understand correctly that that tool would take care of any scenario-related options, such as drift, call order, sample rate, number of channels, etc., and that this tool should only allow setting parameters defining the operation of the APM, such as enabling AEC, set_suppression_level, etc.?

With that approach, however, I think it would be better to remove the ability for audioproc_f to process wav files. The main reason is that if we have that in this tool as well, we will need to ensure that the default call chains in the audioproc_f and simulation-generating tools are both kept the same. Furthermore, we need to ensure that they are kept the same as they are used in WebRTC/Chromium, which I think significantly increases the risk of them diverging. Anyone doing an API change in APM would need to update it in four places, and I think there is a high risk that one of these test files will be missed. We should keep that risk as low as possible, and therefore I think it would make sense to only have the wav-file support in one file. WDYT?

Let me know if we should discuss this over Hangouts in STO evening time.
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/22 04:50:40, peah-webrtc wrote:
> Then the loop does not make sense at all. As it is now, the only reason to have
> the loop is to count the number of chunks, and with the other names we would
> need to drop the chunk counter; then I cannot see any reason at all why the
> loop should be present in this file.

And update the trace time override. My point is that the loop-type interface is more flexible and introduces only one extra line of code for the caller. I think the tradeoff clearly favors the loop-type.

> I definitely see your points about ambiguity, but I think one thing we can rely
> on is that the processing will be frame-based in some manner, and in that case
> the loop makes sense if it is over frames. Or did you intend the loop to
> possibly be over protobuf messages (i.e., one loop iteration per message)?

Hmm, not sure exactly what you mean. There are both frame-based (aka chunk-based) and message-based implementations below.

> Btw, regarding "chunk": it is a replacement for "frame", right? Or is it a
> wider term, in the sense that it can have any size? I think it is 10 ms in all
> places I've seen it.

Here's the nomenclature we've been trying to use:
frame: all samples recorded at a time instant (analogous to a video frame)
chunk: arbitrary number of frames

This interpretation of frame is fairly standard, and pervasive in Chromium and webkit/blink. Some more references:
https://developer.apple.com/library/ios/documentation/MusicAudio/Reference/Co...
https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Basic_concepts...

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
On 2015/10/22 04:50:40, peah-webrtc wrote:
> Do I understand correctly that that tool would take care of any scenario-related
> options, such as drift, call order, sample rate, number of channels, etc., and
> that this tool should only allow setting parameters defining the operation of
> the APM, such as enabling AEC, set_suppression_level, etc.?

Correct.

> With that approach, however, I think it would be better to remove the ability
> for audioproc_f to process wav files. The main reason is that if we have that
> in this tool as well, we will need to ensure that the default call chains in
> the audioproc_f and simulation-generating tools are both kept the same.

WavFileProcessor has no "default call chain". Check it out. It's one call to ProcessStream.

> Furthermore, we need to ensure that they are kept the same as they are used in
> WebRTC/Chromium, which I think significantly increases the risk of them
> diverging. Anyone doing an API change in APM would need to update it in four
> places, and I think there is a high risk that one of these test files will be
> missed.

As above, in general nothing should need to change in WavFileProcessor as new APIs are added.

> We should keep that risk as low as possible, and therefore I think it would
> make sense to only have the wav-file support in one file. WDYT?

Most components (e.g. beamforming, noise suppression) don't need any run-time information beyond the input WAV file. I would hate to require users (me) to run a second tool needlessly for this case.
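To make the "one call to ProcessStream" point concrete, here is roughly what WavFileProcessor's chunk handler boils down to. Member names (buffer_reader_, input_config_, and so on) are assumptions for illustration rather than the CL's exact code.

  // Sketch: the WAV path has no API call chain to keep in sync. A chunk is
  // read, pushed through ProcessStream() once, and written back out.
  bool WavFileProcessor::ProcessChunk() {
    // Stop on the first short read, dropping any trailing < 10 ms of audio.
    if (!buffer_reader_.Read(&in_buf_))
      return false;
    {
      const auto st = ScopedTimer(processing_time());
      RTC_CHECK_EQ(AudioProcessing::kNoError,
                   ap_->ProcessStream(in_buf_.channels(), input_config_,
                                      output_config_, out_buf_.channels()));
    }
    buffer_writer_.Write(out_buf_);
    return true;
  }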
lgtm

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
On 2015/10/22 05:11:55, Andrew MacDonald wrote:
> Most components (e.g. beamforming, noise suppression) don't need any run-time
> information beyond the input WAV file. I would hate to require users (me) to
> run a second tool needlessly for this case.

OK, I yield :-). It is indeed, as you point out, a one-liner, so let's keep it for that use case.
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
I'd like to settle on a name you're happy with (or at least, least unhappy with ;)

ProcessChunk, ProcessSome, ProcessSomeData, something else?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:123: while (processor->ProcessChunk()) {
On 2015/10/22 14:38:30, peah-webrtc wrote:
> OK, I yield :-). It is indeed, as you point out, a one-liner, so let's keep it
> for that use case.

:-)
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/22 16:38:20, Andrew MacDonald wrote:
> I'd like to settle on a name you're happy with (or at least, least unhappy
> with ;)
>
> ProcessChunk, ProcessSome, ProcessSomeData, something else?

From the link I definitely see the risk of confusion if we use "frame".

The link also says: "When an audio signal is processed, sampling means the conversion of a continuous signal to a discrete signal; put another way, a continuous sound wave such as a band playing live is converted to a sequence of samples (a discrete-time signal) that allow a computer to handle the audio in distinct blocks. A sample is a single value or set of multiple values at a specific point in time (and space in the case of spatialisation)."

What about ProcessBlock? Otherwise ProcessChunk is fine.

I for sure think the for-loop makes sense, but what I meant was that it does not make sense if it is used with the wrong name. In APM, output samples are produced in blocks (frames) at the request of an external user that operates on a certain clock and requests samples in integer multiples of the APM block size based on that clock. In my mind, it makes sense for a simulator/call re-creator to operate similarly, in terms of blocks. Therefore I think the for-loop should be based on blocks in some way, regardless of whether they are called chunks, blocks, or frames. An alternative is to loop only over protobuf messages. The current implementation has two nested loops: an outer one over capture blocks and an inner one over the messages that occur before the next capture block is reached. That is fully fine.

https://codereview.webrtc.org/1409943002/diff/80001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/80001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:77: AecDumpFileProcessor::AecDumpFileProcessor(scoped_ptr<AudioProcessing> ap,
One more comment on this: Would it be possible to split AecDumpFileProcessor into a separate file? That way it could be reused elsewhere.
Mr. Luebs, any comments?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:40: virtual bool ProcessChunk() = 0;
On 2015/10/23 07:42:40, peah-webrtc wrote:
> What about ProcessBlock? Otherwise ProcessChunk is fine.

We've been using "block" to refer to the underlying size of audio a processing algorithm uses, typically the FFT size. So ProcessChunk it is!

> The current implementation has two nested loops: an outer one over capture
> blocks and an inner one over the messages that occur before the next capture
> block is reached. That is fully fine.

Great, sounds good.
This is awesome! Thank you for doing it! :) Sorry for the slow response.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:73: buffer_writer_.Write(out_buf_);
It would be awesome to have a ChannelBufferWavReader as well. This implementation would be much more straightforward.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:85: RTC_CHECK(dump_file_);
RTC_DCHECK? And in the other places. Although this is only a test feature.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:108: break;
This is not necessary; it will leave the do-while loop anyway, right?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:135: StreamConfig(msg.reverse_sample_rate(), msg.num_reverse_channels());
reverse_sample_rate

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:144: RTC_CHECK_EQ(in_buf_->num_channels(), msg.input_channel_size());
The APM has support for changing this dynamically. Why not take advantage of that? And for the reverse stream as well?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:149: std::memcpy(in_buf_->channels()[i], msg.input_channel(i).data(),
Check the output of memcpy? And for the reverse stream as well?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:153: const auto st = ScopedTimer(processing_time());
Shouldn't this scope be much tighter around ProcessStream to reflect the real time? And for the reverse stream as well?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:178: const auto st = ScopedTimer(processing_time());
Will this count capture and render times together? Do we want two different timers?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:54: ~ScopedTimer() { *interval_ += TickTime::Now() - start_time_; }
Don't you think it is more intuitive to have a method to stop the timer, instead of letting the scope determine this?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:57: TickInterval* const interval_;
What does this const mean?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:95: FILE* dump_file,
Shouldn't this be a scoped_ptr? Else, you could pass in a string and fopen it internally.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:101: // completed.
Doesn't this mess with the chunk number in process_float?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (left):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:35: "Output sample rate in Hz. Defaults to input.");
Having the output format default to the input one is so intuitive and easy. Is this not possible with a dump file? Or why is this changing?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:135: FLAGS_o.c_str(), out_file.num_channels(), out_file.sample_rate());
Why remove these? I thought it was a nice feature to have the formats printed out.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:79: "FIXME(ajm): The transient suppression output is not dumped.\n");
What TS output?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.cc:37: void ChannelBufferWavWriter::Write(const ChannelBuffer<float>& buffer) {
Check that the sample rate and number of channels correspond to the WavWriter?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.cc:115: RTC_CHECK_GT(num_mics, 0u);
Maybe add a fancy error message here too?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.h:49: explicit ChannelBufferWavWriter(rtc::scoped_ptr<WavWriter> file);
Why does this need to be passed in? Can't we just pass in the file name and let ChannelBufferWavWriter create its own WavWriter? It could even create it lazily and take the sample rate and number of channels from the ChannelBuffer.
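For what it's worth, the ChannelBufferWavReader added in Patch Set 4 in response to the first comment above could look roughly like the sketch below, assuming the FloatS16ToFloat and Deinterleave helpers from common_audio; the class in the CL may differ in detail.

  #include <vector>

  #include "webrtc/base/checks.h"
  #include "webrtc/base/scoped_ptr.h"
  #include "webrtc/common_audio/channel_buffer.h"
  #include "webrtc/common_audio/include/audio_util.h"
  #include "webrtc/common_audio/wav_file.h"

  // Sketch of a reader symmetric to ChannelBufferWavWriter.
  class ChannelBufferWavReader final {
   public:
    explicit ChannelBufferWavReader(rtc::scoped_ptr<WavReader> file)
        : file_(file.Pass()) {}

    // Reads one chunk into |buffer|, returning false on a short read so a
    // trailing partial chunk is dropped, as discussed earlier in the review.
    bool Read(ChannelBuffer<float>* buffer) {
      RTC_CHECK_EQ(file_->num_channels(), buffer->num_channels());
      interleaved_.resize(buffer->num_frames() * buffer->num_channels());
      if (file_->ReadSamples(interleaved_.size(), &interleaved_[0]) !=
          interleaved_.size()) {
        return false;
      }
      // Wav data is S16-scaled float; convert, then split into channels.
      FloatS16ToFloat(&interleaved_[0], interleaved_.size(), &interleaved_[0]);
      Deinterleave(&interleaved_[0], buffer->num_frames(),
                   buffer->num_channels(), buffer->channels());
      return true;
    }

   private:
    rtc::scoped_ptr<WavReader> file_;
    std::vector<float> interleaved_;
  };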
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_file_processor.cc (right): https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:73: buffer_writer_.Write(out_buf_); On 2015/10/24 00:53:34, aluebs-webrtc wrote: > It would be awesome to have a ChannelBufferWavReader as well. This > implementation would be much more straight forward. I didn't bother because it's only used in one place (unlike CBWW which is used twice). But I think the symmetry is nice actually; done. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:85: RTC_CHECK(dump_file_); On 2015/10/24 00:53:34, aluebs-webrtc wrote: > RTC_DCHECK? And in the other places. Although this is only a test feature. No, I want it to crash in release as well. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:108: break; On 2015/10/24 00:53:34, aluebs-webrtc wrote: > This is not necessary, it will leave the dowhile loop anyway, right? True, changed. I was using a different structure earlier that required this. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:135: StreamConfig(msg.reverse_sample_rate(), msg.num_reverse_channels()); On 2015/10/24 00:53:34, aluebs-webrtc wrote: > reverse_sample_rate Did you intend to post this? Not sure what you mean. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:144: RTC_CHECK_EQ(in_buf_->num_channels(), msg.input_channel_size()); On 2015/10/24 00:53:34, aluebs-webrtc wrote: > The APM has support to change this dynamically. Why not take advantage of this? > And for the reverse stream as well? We do take advantage of it. Whenever the stream format changes APM will record an initialization event *before* the stream event. We recreate the buffers while handling the initialization event and therefore expect the channels to always match here. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:149: std::memcpy(in_buf_->channels()[i], msg.input_channel(i).data(), On 2015/10/24 00:53:34, aluebs-webrtc wrote: > Check the output of memcpy? And for the reverse stream as well? http://en.cppreference.com/w/cpp/string/byte/memcpy The return value is just |dest|. memcpy provides no safety. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:153: const auto st = ScopedTimer(processing_time()); On 2015/10/24 00:53:34, aluebs-webrtc wrote: > Shouldn't this scope be much more tight around ProcessStream to reflect the real > time? And for the reverse stream as well? I think it's fair to include the other API calls to APM in the execution time. https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_file_processor.cc:178: const auto st = ScopedTimer(processing_time()); On 2015/10/24 00:53:34, aluebs-webrtc wrote: > This will take into account in the same time capture and render times? Do we > want to have 2 different timers? We didn't before. 
That might be interesting, but I think it would be better to run a manual profile if you want that kind of granularity.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:54: ~ScopedTimer() { *interval_ += TickTime::Now() - start_time_; }
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> Don't you think it is more intuitive to have a method to stop the timer, instead
> of letting the scope determine this?

I find scoped objects quite intuitive (in webrtc, consider scoped_ptr, ScopedCriticalSection), but the main reason for doing this is to avoid making mistakes. You can't forget to stop the timer.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:57: TickInterval* const interval_;
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> What does this const mean?

It means the pointer can't be reseated (its value is const). Contrast with "const T*", which means what the pointer points at is const.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:95: FILE* dump_file,
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> Shouldn't this be a scoped_ptr? Else, you could pass in a string and fopen it
> internally.

It can't be a scoped_ptr unless I specialize the deleter. In its destructor, scoped_ptr calls delete on its underlying pointer. I considered adding such a ScopedFile, but decided it wasn't worth it. I'd be willing to add it if you prefer :)

I used a FILE* rather than string to be consistent with WavFileProcessor (which also takes file objects rather than file names).

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:101: // completed.
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> Doesn't this mess with the chunk number in process_float?

No, because the chunk counter is indeed for the number of processed Stream (aka capture) chunks.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (left):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:135: FLAGS_o.c_str(), out_file.num_channels(), out_file.sample_rate());
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> Why remove these? I thought it was a nice feature to have the formats printed
> out.

OK, added back.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:79: "FIXME(ajm): The transient suppression output is not dumped.\n");
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> What TS output?

TS processes the reverse (aka render) stream. After my modification audioproc_f doesn't dump the processed reverse stream.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.cc:37: void ChannelBufferWavWriter::Write(const ChannelBuffer<float>& buffer) {
On 2015/10/24 00:53:35, aluebs-webrtc wrote:
> Check that the sample rate and number of channels correspond to the WavWriter?

ChannelBuffer doesn't know about sample rate, but I can check the number of channels.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.cc:115: RTC_CHECK_GT(num_mics, 0u);
On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> Maybe add a fancy error message here too?

Done.
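To illustrate the scoped-timer discussion above, a minimal self-contained sketch of the pattern; std::chrono stands in for webrtc's TickTime/TickInterval, and the names are assumptions rather than the CL's code:

    #include <chrono>

    class ScopedTimer {
     public:
      explicit ScopedTimer(std::chrono::nanoseconds* interval)
          : interval_(interval),
            start_time_(std::chrono::steady_clock::now()) {}

      // The elapsed time is committed on scope exit, so the timer cannot be
      // forgotten; this is the mistake-proofing argument made above.
      ~ScopedTimer() {
        *interval_ += std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::steady_clock::now() - start_time_);
      }

     private:
      // "T* const": the pointer itself is const (cannot be reseated) while the
      // pointee stays writable, as the destructor requires. Contrast "const T*",
      // where the pointee would be read-only.
      std::chrono::nanoseconds* const interval_;
      const std::chrono::steady_clock::time_point start_time_;
    };

    // Usage:
    //   std::chrono::nanoseconds processing_time{0};
    //   {
    //     ScopedTimer st(&processing_time);
    //     // ... the APM calls being timed ...
    //   }  // Timer stops here, at end of scope.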
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:73: buffer_writer_.Write(out_buf_);
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > It would be awesome to have a ChannelBufferWavReader as well. This
> > implementation would be much more straightforward.
>
> I didn't bother because it's only used in one place (unlike CBWW which is used
> twice). But I think the symmetry is nice actually; done.

Thank you for adding that! :)

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:85: RTC_CHECK(dump_file_);
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > RTC_DCHECK? And in the other places. Although this is only a test feature.
>
> No, I want it to crash in release as well.

Agreed.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:135: StreamConfig(msg.reverse_sample_rate(), msg.num_reverse_channels());
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > reverse_sample_rate
>
> Did you intend to post this? Not sure what you mean.

I did actually intend to post this. What I meant was: shouldn't it be
msg.reverse_sample_rate() ---> reverse_sample_rate
for the case where !msg.has_reverse_sample_rate()?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:144: RTC_CHECK_EQ(in_buf_->num_channels(), msg.input_channel_size());
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > The APM has support to change this dynamically. Why not take advantage of
> > this? And for the reverse stream as well?
>
> We do take advantage of it. Whenever the stream format changes, APM will record
> an initialization event *before* the stream event. We recreate the buffers
> while handling the initialization event and therefore expect the channels to
> always match here.

Oh, I was not aware that the INIT msg was required then.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:149: std::memcpy(in_buf_->channels()[i], msg.input_channel(i).data(),
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > Check the output of memcpy? And for the reverse stream as well?
>
> http://en.cppreference.com/w/cpp/string/byte/memcpy
>
> The return value is just |dest|. memcpy provides no safety.

My whole life was a lie! :O

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:153: const auto st = ScopedTimer(processing_time());
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > Shouldn't this scope be much tighter around ProcessStream to reflect the real
> > time? And for the reverse stream as well?
>
> I think it's fair to include the other API calls to APM in the execution time.
I guess it depends on what you want to measure, and in practice I don't think it would make a difference. So you can leave it as is.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:178: const auto st = ScopedTimer(processing_time());
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > This will take capture and render times into account in the same timer? Do we
> > want to have 2 different timers?
>
> We didn't before. That might be interesting, but I think it would be better to
> run a manual profile if you want that kind of granularity.

Makes sense. And if wanted, it would be better to do it in another CL anyway :)

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:54: ~ScopedTimer() { *interval_ += TickTime::Now() - start_time_; }
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > Don't you think it is more intuitive to have a method to stop the timer,
> > instead of letting the scope determine this?
>
> I find scoped objects quite intuitive (in webrtc, consider scoped_ptr,
> ScopedCriticalSection), but the main reason for doing this is to avoid making
> mistakes. You can't forget to stop the timer.

But you can forget to scope it right. But if you find them intuitive, I don't have a strong enough opinion to argue.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:57: TickInterval* const interval_;
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > What does this const mean?
>
> It means the pointer can't be reseated (its value is const). Contrast with
> "const T*", which means what the pointer points at is const.

Ok, thanks for clarifying.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:95: FILE* dump_file,
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > Shouldn't this be a scoped_ptr? Else, you could pass in a string and fopen it
> > internally.
>
> It can't be a scoped_ptr unless I specialize the deleter. In its destructor,
> scoped_ptr calls delete on its underlying pointer. I considered adding such a
> ScopedFile, but decided it wasn't worth it. I'd be willing to add it if you
> prefer :)
>
> I used a FILE* rather than string to be consistent with WavFileProcessor (which
> also takes file objects rather than file names).

Oh, I see. Agreed.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:101: // completed.
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > Doesn't this mess with the chunk number in process_float?
>
> No, because the chunk counter is indeed for the number of processed Stream (aka
> capture) chunks.

Good point.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:79: "FIXME(ajm): The transient suppression output is not dumped.\n");
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > What TS output?
>
> TS processes the reverse (aka render) stream. After my modification audioproc_f
> doesn't dump the processed reverse stream.

You mean the Intelligibility Enhancer?

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.cc:37: void ChannelBufferWavWriter::Write(const ChannelBuffer<float>& buffer) {
On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> On 2015/10/24 00:53:35, aluebs-webrtc wrote:
> > Check that the sample rate and number of channels correspond to the WavWriter?
>
> ChannelBuffer doesn't know about sample rate, but I can check the number of
> channels.

Great point about sample rate.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.h:49: explicit ChannelBufferWavWriter(rtc::scoped_ptr<WavWriter> file);
On 2015/10/24 00:53:35, aluebs-webrtc wrote:
> Why does this need to be passed in? Can't we just pass in the file name and let
> ChannelBufferWavWriter create its own WavWriter? It could even create it lazily
> and take the sample rate and number of channels from the ChannelBuffer.

Did you miss this comment? I have the same question regarding the ChannelBufferWavReader.
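For reference, a sketch of the "ScopedFile" idea mentioned above (which the CL deliberately skipped): a smart pointer with a custom deleter that calls fclose() instead of delete. std::unique_ptr is used here in place of rtc::scoped_ptr, and the file name is illustrative only:

    #include <cstdio>
    #include <memory>

    struct FileCloser {
      void operator()(std::FILE* f) const {
        if (f)
          std::fclose(f);
      }
    };
    using ScopedFile = std::unique_ptr<std::FILE, FileCloser>;

    // Usage:
    //   ScopedFile dump_file(std::fopen("debug.aecdump", "rb"));
    //   // dump_file.get() can be passed wherever a raw FILE* is expected,
    //   // and fclose() runs automatically when dump_file leaves scope.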
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:135: StreamConfig(msg.reverse_sample_rate(), msg.num_reverse_channels());
On 2015/10/29 01:03:19, aluebs-webrtc wrote:
> On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> > On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > > reverse_sample_rate
> >
> > Did you intend to post this? Not sure what you mean.
>
> I did actually intend to post this. What I meant was: shouldn't it be
> msg.reverse_sample_rate() ---> reverse_sample_rate
> for the case where !msg.has_reverse_sample_rate()?

Aha, yes it should. Will fix.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:144: RTC_CHECK_EQ(in_buf_->num_channels(), msg.input_channel_size());
On 2015/10/29 01:03:19, aluebs-webrtc wrote:
> On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> > On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > > The APM has support to change this dynamically. Why not take advantage of
> > > this? And for the reverse stream as well?
> >
> > We do take advantage of it. Whenever the stream format changes, APM will
> > record an initialization event *before* the stream event. We recreate the
> > buffers while handling the initialization event and therefore expect the
> > channels to always match here.
>
> Oh, I was not aware that the INIT msg was required then.

It's a little confusing, because an explicit user init call is not required to cause an init message. It will be recorded when APM initializes for any reason.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:149: std::memcpy(in_buf_->channels()[i], msg.input_channel(i).data(),
On 2015/10/29 01:03:19, aluebs-webrtc wrote:
> On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> > On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > > Check the output of memcpy? And for the reverse stream as well?
> >
> > http://en.cppreference.com/w/cpp/string/byte/memcpy
> >
> > The return value is just |dest|. memcpy provides no safety.
>
> My whole life was a lie! :O

:-)

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.h:54: ~ScopedTimer() { *interval_ += TickTime::Now() - start_time_; }
On 2015/10/29 01:03:19, aluebs-webrtc wrote:
> On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> > On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > > Don't you think it is more intuitive to have a method to stop the timer,
> > > instead of letting the scope determine this?
> >
> > I find scoped objects quite intuitive (in webrtc, consider scoped_ptr,
> > ScopedCriticalSection), but the main reason for doing this is to avoid making
> > mistakes. You can't forget to stop the timer.
>
> But you can forget to scope it right. But if you find them intuitive, I don't
> have a strong enough opinion to argue.

That's true, but I think that's harder (given the name), and the worst-case damage is more limited (it will stop when execution leaves the enclosing function scope).
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:79: "FIXME(ajm): The transient suppression output is not dumped.\n");
On 2015/10/29 01:03:20, aluebs-webrtc wrote:
> On 2015/10/29 00:44:50, Andrew MacDonald wrote:
> > On 2015/10/24 00:53:34, aluebs-webrtc wrote:
> > > What TS output?
> >
> > TS processes the reverse (aka render) stream. After my modification
> > audioproc_f doesn't dump the processed reverse stream.
>
> You mean the Intelligibility Enhancer?

Yikes! Yes, that's what I mean. Will fix.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.h:49: explicit ChannelBufferWavWriter(rtc::scoped_ptr<WavWriter> file);
On 2015/10/29 01:03:20, aluebs-webrtc wrote:
> On 2015/10/24 00:53:35, aluebs-webrtc wrote:
> > Why does this need to be passed in? Can't we just pass in the file name and
> > let ChannelBufferWavWriter create its own WavWriter? It could even create it
> > lazily and take the sample rate and number of channels from the
> > ChannelBuffer.
>
> Did you miss this comment? I have the same question regarding the
> ChannelBufferWavReader.

I'm currently using this in a very specific context where we already have a WavWriter object, not a file name. If someone wanted CBWW/CBWR to take a file name in the future, that would be fine; just overload the constructor.
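To make the init-before-stream guarantee concrete, a rough dispatch sketch; the Event message and its INIT/STREAM/REVERSE_STREAM types come from audio_processing's debug.proto, while the Handle* functions and the include path are stand-ins, not the CL's actual code:

    #include "webrtc/audio_processing/debug.pb.h"  // Generated from debug.proto.

    // Stand-in handlers, declared only to make the sketch self-contained.
    void HandleInitMessage(const webrtc::audioproc::Init& msg);
    void HandleStreamMessage(const webrtc::audioproc::Stream& msg);
    void HandleReverseMessage(const webrtc::audioproc::ReverseStream& msg);

    void HandleEvent(const webrtc::audioproc::Event& event) {
      switch (event.type()) {
        case webrtc::audioproc::Event::INIT:
          // Recorded whenever APM (re)initializes, i.e. before any stream
          // whose format changed. Buffers and stream configs get recreated
          // here, which is why the stream handler below can CHECK strictly.
          HandleInitMessage(event.init());
          break;
        case webrtc::audioproc::Event::STREAM:
          // Safe to RTC_CHECK_EQ(in_buf_->num_channels(),
          // msg.input_channel_size()) inside, per the discussion above.
          HandleStreamMessage(event.stream());
          break;
        case webrtc::audioproc::Event::REVERSE_STREAM:
          HandleReverseMessage(event.reverse_stream());
          break;
        default:
          break;
      }
    }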
https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audio_file_processor.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audio_file_processor.cc:135: StreamConfig(msg.reverse_sample_rate(), msg.num_reverse_channels());
On 2015/10/29 01:14:33, Andrew MacDonald wrote:
> On 2015/10/29 01:03:19, aluebs-webrtc wrote:
> > I did actually intend to post this. What I meant was: shouldn't it be
> > msg.reverse_sample_rate() ---> reverse_sample_rate
> > for the case where !msg.has_reverse_sample_rate()?
>
> Aha, yes it should. Will fix.

Done.

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/audioproc_float.cc (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/audioproc_float.cc:79: "FIXME(ajm): The transient suppression output is not dumped.\n");
On 2015/10/29 01:14:33, Andrew MacDonald wrote:
> On 2015/10/29 01:03:20, aluebs-webrtc wrote:
> > You mean the Intelligibility Enhancer?
>
> Yikes! Yes, that's what I mean. Will fix.

Done.
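Sketched, the agreed fix could look something like this; |msg| is an audioproc::Init message, and the local variable name plus the msg.sample_rate() fallback field are assumptions based on the snippets quoted above:

    // Fragment: older dumps may lack a recorded reverse rate, in which case
    // the capture-side rate is reused instead.
    int reverse_sample_rate = msg.sample_rate();
    if (msg.has_reverse_sample_rate())
      reverse_sample_rate = msg.reverse_sample_rate();
    reverse_config_ = StreamConfig(reverse_sample_rate,
                                   msg.num_reverse_channels());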
lgtm

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
File webrtc/modules/audio_processing/test/test_utils.h (right):

https://codereview.webrtc.org/1409943002/diff/60001/webrtc/modules/audio_proc...
webrtc/modules/audio_processing/test/test_utils.h:49: explicit ChannelBufferWavWriter(rtc::scoped_ptr<WavWriter> file);
On 2015/10/29 01:14:33, Andrew MacDonald wrote:
> On 2015/10/29 01:03:20, aluebs-webrtc wrote:
> > On 2015/10/24 00:53:35, aluebs-webrtc wrote:
> > > Why does this need to be passed in? Can't we just pass in the file name and
> > > let ChannelBufferWavWriter create its own WavWriter? It could even create
> > > it lazily and take the sample rate and number of channels from the
> > > ChannelBuffer.
> >
> > Did you miss this comment? I have the same question regarding the
> > ChannelBufferWavReader.
>
> I'm currently using this in a very specific context where we already have a
> WavWriter object, not a file name. If someone wanted CBWW/CBWR to take a file
> name in the future, that would be fine; just overload the constructor.

Sounds reasonable. Agreed.
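As a footnote to the memcpy exchange above, a tiny self-contained demonstration of why checking its return value buys nothing:

    #include <cstring>

    void MemcpyReturnsDest() {
      float dst[160];
      float src[160] = {};
      void* ret = std::memcpy(dst, src, sizeof dst);
      // ret == dst, unconditionally; memcpy cannot report failure, so bounds
      // safety has to be established before the call (hence the explicit
      // channel-count CHECKs discussed above).
      static_cast<void>(ret);
    }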
The CQ bit was checked by andrew@webrtc.org
The patchset sent to the CQ was uploaded after l-g-t-m from peah@webrtc.org Link to the patchset: https://codereview.webrtc.org/1409943002/#ps180001 (title: "Minor fixes.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1409943002/180001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1409943002/180001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: ios_arm64_dbg on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios_arm64_dbg/builds/5266) ios_rel on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios_rel/builds/9023) mac_compile_dbg on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/mac_compile_dbg/builds/798) mac_compile_x64_dbg on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/mac_compile_x64_dbg/bui...) mac_x64_gn_dbg on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/mac_x64_gn_dbg/builds/4930) mac_x64_rel on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/mac_x64_rel/builds/10146)
The CQ bit was checked by andrew@webrtc.org
The patchset sent to the CQ was uploaded after l-g-t-m from aluebs@webrtc.org, peah@webrtc.org Link to the patchset: https://codereview.webrtc.org/1409943002/#ps200001 (title: "Rebase.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1409943002/200001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1409943002/200001
Message was sent while issue was closed.
Committed patchset #8 (id:200001)
Message was sent while issue was closed.
Patchset 8 (id:??) landed as https://crrev.com/bdafe31b86e9819b0adb9041f87e6194b7422b08 Cr-Commit-Position: refs/heads/master@{#10460}
Message was sent while issue was closed.
FYI, this CL breaks building on iOS. See https://chromegw.corp.google.com/i/internal.client.webrtc/builders/iOS64%20De... for details.
Message was sent while issue was closed.
Description was changed from ========== Add aecdump support to audioproc_f. Add a new interface to abstract away file operations. This CL temporarily removes support for dumping the output of reverse streams. It will be easy to restore in the new framework, although we may decide to only allow it with the aecdump format. We also now require the user to specify the output format, rather than defaulting to the input format. TEST=Bit-exact output to the previous audioproc_f version using an input wav file, and to the legacy audioproc using an aecdump file. Committed: https://crrev.com/bdafe31b86e9819b0adb9041f87e6194b7422b08 Cr-Commit-Position: refs/heads/master@{#10460} ========== to ========== Add aecdump support to audioproc_f. Add a new interface to abstract away file operations. This CL temporarily removes support for dumping the output of reverse streams. It will be easy to restore in the new framework, although we may decide to only allow it with the aecdump format. We also now require the user to specify the output format, rather than defaulting to the input format. TEST=Bit-exact output to the previous audioproc_f version using an input wav file, and to the legacy audioproc using an aecdump file. Committed: https://crrev.com/bdafe31b86e9819b0adb9041f87e6194b7422b08 Cr-Commit-Position: refs/heads/master@{#10460} ==========
Message was sent while issue was closed.
On 2015/11/04 15:18:55, henrika_webrtc wrote:
> FYI, this CL breaks building on iOS.
>
> See
> https://chromegw.corp.google.com/i/internal.client.webrtc/builders/iOS64%20De...
> for details.

What's the deal with this internal builder? Why isn't it in the public waterfall (which would have resulted in knowing about this much sooner)?
Message was sent while issue was closed.
+kjellander in case he has any insight on this.
Message was sent while issue was closed.
On 2015/11/04 20:36:09, Andrew MacDonald wrote:
> On 2015/11/04 15:18:55, henrika_webrtc wrote:
> > FYI, this CL breaks building on iOS.
> >
> > See
> > https://chromegw.corp.google.com/i/internal.client.webrtc/builders/iOS64%20De...
> > for details.
>
> What's the deal with this internal builder? Why isn't it in the public
> waterfall (which would have resulted in knowing about this much sooner)?

It has to do with the iOS devices in Chrome infra not being allowed on public bots, due to licensing or something similar (I don't remember). It's the same for all Chrome on iOS bots with devices. The right way to prevent this from happening is to remove the build_with_libjingle=1 variable from the GYP_DEFINES of our public iOS bots; then we would have caught this on our trybots. I can't remove it right now, since the current build fails (see also https://codereview.webrtc.org/1419533010/).
Message was sent while issue was closed.
A revert of this CL (patchset #8 id:200001) has been created in https://codereview.webrtc.org/1411823003/ by kjellander@google.com. The reason for reverting is: This breaks iOS GYP generation as described on http://www.webrtc.org/native-code/ios. I'm going to drive getting the build_with_libjingle=1 setting removed from the bots to match the official instructions. See https://code.google.com/p/webrtc/issues/detail?id=4653 for more context, as this is exactly what that issue tries to solve.
Message was sent while issue was closed.
A revert of this CL (patchset #8 id:200001) has been created in https://codereview.webrtc.org/1423693008/ by kjellander@webrtc.org. The reason for reverting is: This breaks iOS GYP generation as described on http://www.webrtc.org/native-code/ios. I'm going to drive getting the build_with_libjingle=1 setting removed from the bots to match the official instructions. See https://code.google.com/p/webrtc/issues/detail?id=4653 for more context, as this is exactly what that issue tries to solve.