Created: 5 years, 1 month ago by sprang_webrtc
Modified: 5 years, 1 month ago
CC: webrtc-reviews_webrtc.org, yujie_mao (webrtc), stefan-webrtc, tterriberry_mozilla.com, mflodman, the sun, perkj_webrtc, andresp
Base URL: https://chromium.googlesource.com/external/webrtc.git@master
Target Ref: refs/pending/heads/master
Project: webrtc
Visibility: Public.
Description

Allow pacer to boost bitrate in order to meet time constraints.
Currently we limit the encoder so that frames aren't encoded if the expected
pacer queue time is longer than 2s. However, if the queue is full and the
bitrate suddenly drops (or there is a large overshoot), the queue time can end
up longer than that limit.
This CL allows the pacer to temporarily boost the pacing bitrate so that the
queue can still be drained within the 2s window.
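To make the numbers concrete (an illustrative example, not figures from the
CL): suppose 2 Mbit of frames are queued while the pacing rate is 1 Mbps, so
the expected queue time is exactly 2 s. If the estimate suddenly drops to
500 kbps, draining the same queue would take 4 s. To still meet the 2 s bound,
the pacer must temporarily pace at queue size in bits / 2 s = 1 Mbps rather
than at the new, lower rate.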
BUG=
Committed: https://crrev.com/0a43fef6dc8ce95a3ec52921870e08799ee9a250
Cr-Commit-Position: refs/heads/master@{#10729}
Patch Set 1 #
Total comments: 10
Patch Set 2 : Addressed comments #
Patch Set 3 : Use average queue time and size to determine boost rate #
Total comments: 6
Patch Set 4 : Addressed comments #
Patch Set 5 : Cast #
Messages
Total messages: 21 (5 generated)
sprang@webrtc.org changed reviewers: + mflodman@webrtc.org, stefan@webrtc.org
https://codereview.webrtc.org/1412293003/diff/1/webrtc/modules/pacing/paced_s...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:328: if (bitrate_boost_start_us_ != -1 &&
Is the bitrate_boost_start_us_ really needed? Can't we compute a new
required_bitrate_kbps every time we call Process() as long as the
expected_queue_length_ms is greater than the max queue length? Maybe even go as
far as always doing:

  media_budget_->set_target_rate_kbps(
      std::max(required_bitrate_kbps, max_bitrate_kbps_));

If the expected_queue_length_ms goes below kMaxQueueLengthMs,
required_bitrate_kbps will automatically be less than max_bitrate_kbps_, and we
will automatically be back to normal.

webrtc/modules/pacing/paced_sender.cc:336: static_cast<int64_t>(packets_->SizeInBytes() * 8 / max_bitrate_kbps_);
Call ExpectedQueueTimeMs() instead.

File webrtc/modules/pacing/paced_sender_unittest.cc (right):

webrtc/modules/pacing/paced_sender_unittest.cc:670: static_cast<int64_t>(kPacketSize * 8 / kMaxBitrate));
EXPECT_NEAR(duration, PacedSender::.., kPacketSize*8 / kMaxBitrate);
https://codereview.webrtc.org/1412293003/diff/1/webrtc/modules/pacing/paced_s...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:328: if (bitrate_boost_start_us_ != -1 &&
On 2015/10/28 15:45:23, stefan-webrtc (holmer) wrote:
> Is the bitrate_boost_start_us_ really needed? Can't we compute a new
> required_bitrate_kbps every time we call Process() as long as the
> expected_queue_length_ms is greater than the max queue length? [...]

This was how I first implemented it, and it turns out it doesn't work (if we
actually want a packet that enters the queue to be out two seconds later). The
reason is that on each Process() call we calculate how fast we need to go to
get everything currently in the queue out within two seconds from _now_, not
from when the packets entered the queue.

A simplified example: we have 1000 bytes in the queue and the transmit rate is
250 bytes per second. In order to get everything out in two seconds we need a
transmit rate of 500 bytes per second, so we use that.

One second later we call Process() again; we have transmitted half the queue,
so 500 bytes are left. Since a transmit rate of 250 bytes per second is
sufficient to send 500 bytes within a fresh two-second window, we fall back to
that for the remainder of the queue.

The result is that the data in the queue took three seconds to transmit.
Calling Process() more frequently can make this behavior even worse...

webrtc/modules/pacing/paced_sender.cc:336: static_cast<int64_t>(packets_->SizeInBytes() * 8 / max_bitrate_kbps_);
On 2015/10/28 15:45:23, stefan-webrtc (holmer) wrote:
> Call ExpectedQueueTimeMs() instead.

Sure. Or do we care about recursively taking locks?

File webrtc/modules/pacing/paced_sender_unittest.cc (right):

webrtc/modules/pacing/paced_sender_unittest.cc:670: static_cast<int64_t>(kPacketSize * 8 / kMaxBitrate));
On 2015/10/28 15:45:23, stefan-webrtc (holmer) wrote:
> EXPECT_NEAR(duration, PacedSender::.., kPacketSize*8 / kMaxBitrate);

Done.
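The arithmetic in this example can be reproduced with a standalone sketch
(editorial illustration, not code from this CL): recomputing the required rate
over a fresh 2 s horizon on every Process() call stretches the drain time.

  #include <algorithm>
  #include <cstdio>

  int main() {
    double queue_bytes = 1000.0;       // initial queue
    const double normal_rate = 250.0;  // bytes/sec
    const double window = 2.0;         // drain horizon, measured from "now"
    const double step = 1.0;           // Process() interval in seconds
    double t = 0.0;
    while (queue_bytes > 0.0) {
      // Rate needed to drain the *current* queue within two seconds from now.
      double required = queue_bytes / window;
      double rate = std::max(normal_rate, required);
      queue_bytes -= rate * step;
      t += step;
    }
    printf("Queue drained after %.0f s\n", t);  // prints 3, not 2
    return 0;
  }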
https://codereview.webrtc.org/1412293003/diff/1/webrtc/modules/pacing/paced_s...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:328: if (bitrate_boost_start_us_ != -1 &&
On 2015/11/02 12:17:56, språng wrote:
> One second later we call Process() again; we have transmitted half the queue,
> so 500 bytes are left. [...]

This isn't correct I think. Those 500 bytes have been in the queue for one
second already, so they must be sent within the next second.

> The result is that the data in the queue took three seconds to transmit.
> Calling Process() more frequently can make this behavior even worse...

I do understand the issue related to this based on the offline discussion.

There are two events affecting the boosted pacing rate: packets being added to
or removed from the queue. If we know we always have a pacing rate which is at
least high enough for the queue to empty within 2 seconds, shouldn't we then be
able to modify the rate every time a packet is added or removed?

Say we have 1000 bytes of packets in there, as in your example, and the boosted
rate is 500 bytes/second (while the actual rate is 250 bytes/sec). If we then
add one packet 100 ms after the last packet in the queue was added, shouldn't
we be able to fold the new packet into the boosted rate, given that we know how
long the estimated queue currently is? For instance:

  t=0:   500 bytes added, rate = 500/2 = 250 bytes/s, q = 2 sec
  t=0.1: 500 bytes added, rate = (rate*q + 500) / (q+0.1) = 476 bytes/s, q = 2.1 sec

and so on... Does this make sense? I will still blame my jetlag if it
doesn't. :P

webrtc/modules/pacing/paced_sender.cc:336: static_cast<int64_t>(packets_->SizeInBytes() * 8 / max_bitrate_kbps_);
On 2015/11/02 12:17:56, språng wrote:
> Sure. Or do we care about recursively taking locks?

I would prefer not to take them recursively. Just create a private version of
the method which requires the lock to be acquired. Maybe name it Internal.
https://codereview.webrtc.org/1412293003/diff/1/webrtc/modules/pacing/paced_s...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:328: if (bitrate_boost_start_us_ != -1 &&
On 2015/11/10 14:57:02, stefan-webrtc (holmer) wrote:
> t=0:   500 bytes added, rate = 500/2 = 250 bytes/s, q = 2 sec
> t=0.1: 500 bytes added, rate = (rate*q + 500) / (q+0.1) = 476 bytes/s, q = 2.1 sec
> [...]

And something similar could be done when a packet is removed from the queue...
and we have to make sure the rate never goes below max_bitrate_kbps_.
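A sketch of that running update (editorial illustration with made-up names; it
mirrors the numbers in the comment above rather than any code in the CL):

  #include <algorithm>

  // rate: boosted pacing rate in bytes/sec; q: estimated queue duration in
  // seconds. On every enqueue, fold the new packet into the boosted rate.
  void OnPacketEnqueued(double* rate, double* q, double bytes, double dt) {
    *rate = (*rate * *q + bytes) / (*q + dt);  // spread queued + new bytes
    *q += dt;                                  // queue horizon grows by dt
  }

  // Usage, reproducing the numbers above (normal rate 250 bytes/s):
  //   double rate = 500.0 / 2.0, q = 2.0;       // t=0: first 500-byte packet
  //   OnPacketEnqueued(&rate, &q, 500.0, 0.1);  // t=0.1: rate ~476, q = 2.1
  //   rate = std::max(rate, 250.0);             // never below the normal rate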
https://codereview.webrtc.org/1412293003/diff/1/webrtc/modules/pacing/paced_s...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:328: if (bitrate_boost_start_us_ != -1 &&
On 2015/11/10 14:59:44, stefan-webrtc (holmer) wrote:
> And something similar could be done when a packet is removed from the
> queue... and we have to make sure the rate never goes below max_bitrate_kbps_.
This method won't work quite right either, since we can't properly weight the
bitrate needed per packet. The method we discussed offline also won't work,
since it again would require going through the list of packets and updating the
min send bitrate if the actual send bitrate exceeds the min needed (or we risk
significantly overshooting the rate needed if the bitrate drops).

Instead, I've implemented a simpler approach which also isn't perfectly
accurate, but which is quite cheap and should be good enough. Basically, we
track the average queue time of the packets in the pacer. Assuming (among other
things) equal input and output rates, we have 2s - avg_queue_time to send the
full size of the queue. If we disregard the input rate, queue size / 2 would be
more accurate, but I'm not sure about that assumption... WDYT?
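In outline, the boost described here looks roughly like this (a sketch with
illustrative names; see the CL's paced_sender.cc for the real code):

  #include <algorithm>
  #include <cstdint>

  const int64_t kMaxQueueLengthMs = 2000;

  // Boosted pacing target: drain the whole queue within whatever is left of
  // the 2 s budget after the average queue time already spent, but never
  // pace below the configured rate.
  int PacingTargetKbps(int configured_rate_kbps,
                       int64_t queue_size_bytes,
                       int64_t avg_queue_time_ms) {
    int64_t ms_left =
        std::max<int64_t>(1, kMaxQueueLengthMs - avg_queue_time_ms);
    // bytes * 8 bits divided by milliseconds gives kilobits per second.
    int required_kbps = static_cast<int>(queue_size_bytes * 8 / ms_left);
    return std::max(configured_rate_kbps, required_kbps);
  }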
The CQ bit was checked by sprang@webrtc.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1412293003/40001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1412293003/40001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders:
  ios_dbg on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios_dbg/builds/10847)
  mac_x64_gn_rel on tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/mac_x64_gn_rel/builds/5298)
This looks like a good solution. lgtm

https://codereview.webrtc.org/1412293003/diff/40001/webrtc/modules/pacing/pac...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:140: if (now > time_last_updated_) {
If these are equal the delta will be zero and the operations below will be a
noop. I think you can remove this if-statement. Or did you simply want to
avoid the computation?
https://codereview.webrtc.org/1412293003/diff/40001/webrtc/modules/pacing/pac...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:120: queue_time_sum_ -= (clock_->TimeInMilliseconds() - packet.enqueue_time_ms);
Can this cause queue_time_sum_ to become negative?
https://codereview.webrtc.org/1412293003/diff/40001/webrtc/modules/pacing/pac...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:120: queue_time_sum_ -= (clock_->TimeInMilliseconds() - packet.enqueue_time_ms);
I'm not sure clock_->TimeInMilliseconds() is the right thing to subtract here;
I think it should rather be time_last_updated_. Otherwise we will subtract more
here than we add to queue_time_sum_, which might in the end make it negative.

webrtc/modules/pacing/paced_sender.cc:140: if (now > time_last_updated_) {
On 2015/11/19 17:13:46, stefan-webrtc (holmer) wrote:
> If these are equal the delta will be zero and the operations below will be a
> noop. I think you can remove this if-statement. Or did you simply want to
> avoid the computation?

There is also already a check in Process() preventing this from happening.
https://codereview.webrtc.org/1412293003/diff/40001/webrtc/modules/pacing/pac...
File webrtc/modules/pacing/paced_sender.cc (right):

webrtc/modules/pacing/paced_sender.cc:120: queue_time_sum_ -= (clock_->TimeInMilliseconds() - packet.enqueue_time_ms);
On 2015/11/19 18:58:55, mflodman wrote:
> I'm not sure clock_->TimeInMilliseconds() is the right thing to subtract
> here; I think it should rather be time_last_updated_.

Done.

webrtc/modules/pacing/paced_sender.cc:140: if (now > time_last_updated_) {
On 2015/11/19 17:13:46, stefan-webrtc (holmer) wrote:
> If these are equal the delta will be zero and the operations below will be a
> noop. I think you can remove this if-statement. Or did you simply want to
> avoid the computation?

It was mostly to avoid unnecessary execution. But as pointed out this won't
happen due to the check in Process(), so removing it.
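The resulting bookkeeping, as a standalone sketch (illustrative struct and
names, not the CL's exact code): queue_time_sum_ is only ever advanced up to
time_last_updated_, so a sent packet's contribution must be removed up to that
same instant; subtracting up to clock_->TimeInMilliseconds() would remove time
that was never added and could drive the sum negative.

  #include <cstdint>

  struct QueueTimeTracker {
    int64_t queue_time_sum_ms = 0;     // summed queue time of queued packets
    int64_t time_last_updated_ms = 0;  // instant the sum was advanced to
    int64_t packets_in_queue = 0;

    // Called from Process(): advance every queued packet's time by the delta.
    void UpdateQueueTime(int64_t now_ms) {
      queue_time_sum_ms += (now_ms - time_last_updated_ms) * packets_in_queue;
      time_last_updated_ms = now_ms;
    }

    void OnPacketQueued(int64_t now_ms) {
      UpdateQueueTime(now_ms);
      ++packets_in_queue;
    }

    // Remove only the time actually accumulated for this packet, i.e. up to
    // time_last_updated_ms -- not up to "now".
    void OnPacketSent(int64_t enqueue_time_ms) {
      queue_time_sum_ms -= (time_last_updated_ms - enqueue_time_ms);
      --packets_in_queue;
    }

    int64_t AverageQueueTimeMs() const {
      return packets_in_queue > 0 ? queue_time_sum_ms / packets_in_queue : 0;
    }
  };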
LGTM
The CQ bit was checked by sprang@webrtc.org
The patchset sent to the CQ was uploaded after l-g-t-m from stefan@webrtc.org Link to the patchset: https://codereview.webrtc.org/1412293003/#ps80001 (title: "Cast")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1412293003/80001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1412293003/80001
Message was sent while issue was closed.
Committed patchset #5 (id:80001)
Message was sent while issue was closed.
Patchset 5 (id:??) landed as https://crrev.com/0a43fef6dc8ce95a3ec52921870e08799ee9a250 Cr-Commit-Position: refs/heads/master@{#10729} |