Description: Replace the stop_event_ in PlatformThread with an atomic flag
BUG=webrtc:7187
Review-Url: https://codereview.webrtc.org/2708433002
Cr-Commit-Position: refs/heads/master@{#16705}
Committed: https://chromium.googlesource.com/external/webrtc/+/82ead60076bbe74b47f2491a1af6e09c7641dec4
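The description above is terse, so here is a hedged sketch of the core idea — swapping an event object for an atomic stop flag in a thread's run loop. This is illustrative only, not WebRTC's actual PlatformThread code; the class and method names are invented:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

class SimpleStoppableThread {
 public:
  void Start() {
    stop_flag_ = false;
    thread_ = std::thread([this] { Run(); });
  }
  void Stop() {
    stop_flag_ = true;  // Replaces signaling a stop event.
    thread_.join();
  }
  int iterations() const { return iterations_; }

 private:
  void Run() {
    // Instead of waiting on an event with a timeout, check an atomic flag
    // each pass and sleep briefly so the loop never spins.
    while (!stop_flag_) {
      ++iterations_;
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
  }
  std::atomic<bool> stop_flag_{false};
  std::thread thread_;
  int iterations_ = 0;  // Safe to read after join().
};
```

The trade-off discussed later in this thread follows directly from this shape: without the sleep, a callback that asks to run again immediately turns the loop into a busy spin.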
Patch Set 1
Patch Set 2: Update ProcessThreadImpl tests to not always return 0. Guessing it may be causing bot problems.
Total comments: 1
Patch Set 3: Add sleep/yield + TODO
Patch Set 4: Now hitting 'save' before committing the TODO
Messages
Total messages: 31 (18 generated)
tommi@webrtc.org changed reviewers: + solenberg@webrtc.org
The CQ bit was checked by tommi@webrtc.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.webrtc.org/...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: presubmit on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/presubmit/builds/13680)
Update ProcessThreadImpl tests to not always return 0. Guessing it may be causing bot problems
Patchset #2 (id:20001) has been deleted
https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... File webrtc/base/platform_thread.cc (right): https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... webrtc/base/platform_thread.cc:203: #else Do we want to give up the remainder of our time slice with e.g. sleep(0), pthread_yield_np() or sched_yield()? Unfortunately I don't know Linux/Posix well enough to make a proper recommendation in favor of one of the above...
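For reference, the candidates named in the comment above could be sketched as follows (POSIX; pthread_yield_np() is the Darwin-only spelling, so this sketch sticks to the portable calls — purely illustrative, not the patch's actual code):

```cpp
#include <sched.h>  // sched_yield()
#include <time.h>   // nanosleep()

// Give up the remainder of the time slice. sched_yield() only helps if
// another runnable thread exists; sleep(0) may be a no-op on some systems.
void GiveUpTimeSlice() {
  sched_yield();
}

// A short nanosleep() reliably deschedules the calling thread even when no
// other thread is runnable, which is what prevents a busy loop.
void ShortSleep() {
  timespec ts{0, 1000000};  // 1 ms
  nanosleep(&ts, nullptr);
}
```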
On 2017/02/18 23:18:25, the sun wrote: > https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... > File webrtc/base/platform_thread.cc (right): > > https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... > webrtc/base/platform_thread.cc:203: #else > Do we want to give up the remainder of our time slice with e.g. sleep(0), > pthread_yield_np() or sched_yield()? > > Unfortunately I don't know Linux/Posix well enough to make a proper > recommendation in favor of one of the above... Also, create a bug describing the issue. Is it wise to run profiling before deciding to land this?
On 2017/02/18 23:34:19, the sun wrote: > On 2017/02/18 23:18:25, the sun wrote: > > > https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... > > File webrtc/base/platform_thread.cc (right): > > > > > https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... > > webrtc/base/platform_thread.cc:203: #else > > Do we want to give up the remainder of our time slice with e.g. sleep(0), > > pthread_yield_np() or sched_yield()? > > > > Unfortunately I don't know Linux/Posix well enough to make a proper > > recommendation in favor of one of the above... > > Also, create a bug describing the issue. Will do. > Is it wise to run profiling before deciding to land this? Yes, if I land the change in its current form, it will affect the performance tests. The linux_memcheck bot highlighted how the change could be disastrous if we have places in our production code that return 0 from TimeUntilNextProcess. The way the tests are run in that case doesn't seem to use a truly preemptive scheduler, but that's probably a good thing in this case. What I was wondering about was whether giving up the time slice is a good or bad thing. Afaik, yield might do nothing if there aren't other threads that need the CPU, so we could still run into a busy loop. Maybe not having a sleep is a big enough footgun that we shouldn't leave it lying around :) Then again, it would be good to know about those busy-loop places if we have them, so maybe we could build some detection mechanism into PlatformThread that prints out warnings with the thread name when that happens? E.g., to pick an arbitrary example, if we get something like 100 loops in a millisecond, print a warning and go to sleep? (Tick count granularity of course makes this more complex in practice, but anyway, some sort of detection.) For now it's probably best to add a sleep. Longer term, I wonder if having the loop in PlatformThread at all is a good design choice. Every use of a thread will have to support some way of stopping, and having a built-in polling loop by default isn't a good default.
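The detection idea floated in the message above (warn and back off after something like 100 loops within a millisecond) could be sketched roughly as follows. The threshold, class name, and warning text are all invented for illustration; this is not part of the actual patch:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Illustrative busy-loop detector: if the loop body runs more than
// kThreshold times within a 1 ms window, print a warning and back off
// with a short sleep, as suggested in the review discussion.
class BusyLoopDetector {
 public:
  // Call once per loop iteration. Returns true (after sleeping) when a
  // busy loop is suspected.
  bool OnLoopIteration() {
    using namespace std::chrono;
    auto now = steady_clock::now();
    if (now - window_start_ > milliseconds(1)) {
      window_start_ = now;  // Start a new 1 ms counting window.
      count_ = 0;
    }
    if (++count_ > kThreshold) {
      std::fprintf(stderr, "warning: possible busy loop detected\n");
      std::this_thread::sleep_for(milliseconds(1));
      count_ = 0;
      return true;
    }
    return false;
  }

 private:
  static constexpr int kThreshold = 100;
  std::chrono::steady_clock::time_point window_start_ =
      std::chrono::steady_clock::now();
  int count_ = 0;
};
```

As the message notes, tick granularity makes a production version more subtle than this; the sketch only shows the shape of the mechanism.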
Add sleep/yield + TODO
The CQ bit was checked by tommi@webrtc.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.webrtc.org/...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: presubmit on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/presubmit/builds/13690)
Description was changed from ========== Replace the stop_event_ in PlatformThread with an atomic flag BUG= ========== to ========== Replace the stop_event_ in PlatformThread with an atomic flag BUG=webrtc:7187 ==========
Now hitting 'save' before committing the TODO
The CQ bit was checked by tommi@webrtc.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.webrtc.org/...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
On 2017/02/19 09:45:43, tommi (webrtc) wrote: > On 2017/02/18 23:34:19, the sun wrote: > > On 2017/02/18 23:18:25, the sun wrote: > > > > > > https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... > > > File webrtc/base/platform_thread.cc (right): > > > > > > > > > https://codereview.webrtc.org/2708433002/diff/40001/webrtc/base/platform_thre... > > > webrtc/base/platform_thread.cc:203: #else > > > Do we want to give up the remainder of our time slice with e.g. sleep(0), > > > pthread_yield_np() or sched_yield()? > > > > > > Unfortunately I don't know Linux/Posix well enough to make a proper > > > recommendation in favor of one of the above... > > > > Also, create a bug describing the issue. > > Will do. > > > Is it wise to run profiling before deciding to land this? > > Yes, if I land the change in its current form, it will affect the performance > tests. The linux_memcheck bot highlighted how the change could be disastrous if > we have places in our production code that return 0 from TimeUntilNextProcess. > The way the tests are run in that case doesn't seem to use a truly preemptive > scheduler, but that's probably a good thing in this case. > > What I was wondering about was whether giving up the time slice is a good or > bad thing. Afaik, yield might do nothing if there aren't other threads that > need the CPU, so we could still run into a busy loop. Maybe not having a sleep > is a big enough footgun that we shouldn't leave it lying around :) Then again, > it would be good to know about those busy-loop places if we have them, so maybe > we could build some detection mechanism into PlatformThread that prints out > warnings with the thread name when that happens? E.g., to pick an arbitrary > example, if we get something like 100 loops in a millisecond, print a warning > and go to sleep? (Tick count granularity of course makes this more complex in > practice, but anyway, some sort of detection.) > > For now it's probably best to add a sleep.
> Longer term, I wonder if having the loop in PlatformThread at all is a good > design choice. Every use of a thread will have to support some way of stopping, > and having a built-in polling loop by default isn't a good default. SGTM - I found it confusing when I first encountered PlatformThread. By the name it sounds like an abstraction of platform-specific thread APIs, and every such abstraction I've dealt with just runs a function to termination in a separate thread. No loop. Also, the ProcessThread/PlatformThread relationship is a bit confusing, so it sounds like a good idea to remove the looping from PlatformThread and keep that behavior in ProcessThread (it's a common construct after all). As an aside, IIUC with the current impl, ProcessThreadImpl::Process() will wait on an event (deschedule) before it returns to PlatformThread::Run(), where it will immediately deschedule itself with another wait/sleep. I bet the scheduler is not really in need of that extra exercise.
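The "run a function to termination, no built-in loop" design discussed here could be sketched like this — names are invented for illustration, and any polling loop lives in the caller's function, as the ProcessThread-keeps-the-loop proposal suggests:

```cpp
#include <thread>
#include <utility>

// A thread wrapper with no built-in loop: it simply runs the supplied
// function to completion. A component that needs periodic work (as
// ProcessThread does) writes its own loop inside the function instead,
// so stopping is the function's own responsibility.
class RunToCompletionThread {
 public:
  template <typename F>
  explicit RunToCompletionThread(F fn) : thread_(std::move(fn)) {}
  ~RunToCompletionThread() {
    if (thread_.joinable()) thread_.join();
  }

 private:
  std::thread thread_;
};
```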
lgtm
- Do you "know" if nanosleep() will actually give up its time slice (docs I find look ambiguous)?
- I can help you run profiling tomorrow to see whether this has the anticipated effect.
On 2017/02/19 23:28:49, the sun wrote: > lgtm > > - Do you "know" if nanosleep() will actually give up time slice (docs I find > look ambiguous)? > - I can help you run profiling tomorrow to see whether this has the anticipated > effect. iirc nanosleep() should always do that whereas yield may not - except on Mac, where it does. :) However, it's been a long time since I was reading about that (and while I have done experiments, I haven't spent a lot of time verifying this). Profiling tomorrow sounds great :)
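One crude way to probe the nanosleep() question empirically: a 1 ms nanosleep should make at least ~1 ms of wall time elapse, which at minimum shows the thread was suspended rather than returning immediately. This is only an illustration, not a scheduler proof, and the function name is invented:

```cpp
#include <time.h>  // nanosleep()
#include <chrono>

// Sleep ~1 ms via nanosleep() and return the measured wall time in ms.
// nanosleep() guarantees at least the requested interval elapses
// (barring signal interruption).
double SleepOneMsAndMeasure() {
  auto start = std::chrono::steady_clock::now();
  timespec ts{0, 1000000};  // 1 ms in nanoseconds
  nanosleep(&ts, nullptr);
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(end - start).count();
}
```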
The CQ bit was checked by tommi@webrtc.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.webrtc.org/...
CQ is committing da patch. Bot data: {"patchset_id": 80001, "attempt_start_ts": 1487549219271800, "parent_rev": "d95c48a810e8685127a9a02ce9a66d7f3e5027d7", "commit_rev": "82ead60076bbe74b47f2491a1af6e09c7641dec4"}
Message was sent while issue was closed.
Description was changed from ========== Replace the stop_event_ in PlatformThread with an atomic flag BUG=webrtc:7187 ========== to ========== Replace the stop_event_ in PlatformThread with an atomic flag BUG=webrtc:7187 Review-Url: https://codereview.webrtc.org/2708433002 Cr-Commit-Position: refs/heads/master@{#16705} Committed: https://chromium.googlesource.com/external/webrtc/+/82ead60076bbe74b47f2491a1... ==========
Message was sent while issue was closed.
Committed patchset #4 (id:80001) as https://chromium.googlesource.com/external/webrtc/+/82ead60076bbe74b47f2491a1...