#tokio-docs
452 messages · Page 1 of 1 (latest)
yep
That is so brilliant.
Thank you! I don't know why I couldn't seem to find a straight answer to it. I appreciate it. I'm taking inspiration from tokio with regard to properly documenting my library. Now it makes sense why you have all the ...cfg... derive macros or what not for feature gates
Errrr, what I mean is now it all makes sense. I know that those macros are not the same thing. Just saying, I understand how everything is related now. Thanks again!
@dull sail Note that this also requires some stuff in your Cargo.toml and lib.rs if you want to use it.
Right. I felt like I was just missing a single piece to a puzzle. I'll make sure that I take a closer look and really understand everything.
You are the best 🙂
yes
they are the same
it waits a certain amount of time
it’s not very complicated
iirc, and i probably don’t, tokio just has a thread pool of native threads
Sleep is a “green thread” style timer; it is very low overhead
@lethal berry https://github.com/tokio-rs/website/issues/658
I guess it's just missing tokio::spawn 
yeah I suppose, but like its just an example. Its not meant to be real
I replied
well, you did make a point and not show it in the code 😛
I guess yeah but need to motivate using a future somehow 😅
does tokio use linux's epoll somewhere?
Yes. That happens inside the mio crate.
Read this for some context: https://ryhl.io/blog/async-what-is-blocking/
Tokio uses mio to determine which task to swap to.
what advantage one gets by using tokio::sync::Mutex over std::sync::Mutex?
What questions do you have re: Mutex after reading this: https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use
how expensive is tokio::spawn? is it okay to use 1000 tasks at once?
definitely
tokio::spawn is like, one heap allocation and a couple atomic ops
It's not expensive at all. In general, the number of tasks is not what matters, because the tasks that are idle do not consume any compute resources.
what if none of them are idle?
Well, if they are not idle, then they take up whatever resources it takes to run the code you put in them.
understandable thx
I've written a draft of a small note warning about a footgun in fs, thought it might be best to get some opinions here first before I send the PR:
//! **Warning**: These adapters may create a large number of temporary tasks,
//! especially when reading large files. When performing a lot of operations
//! in one batch, it may be significantly faster to use [`spawn_blocking`]
//! directly:
//!
//! ```
//! # #[tokio::main]
//! # async fn main() -> Result<(), Box<dyn std::error::Error>> {
//! use std::io::{BufRead, BufReader};
//! let line_count = tokio::task::spawn_blocking(move || -> Result<usize, std::io::Error> {
//!     let file = std::fs::File::open("/usr/share/dict/words")?;
//!     let line_count = BufReader::new(file).lines().count();
//!     Ok(line_count)
//! }).await??;
//! # Ok(())
//! # }
//! ```
gah discord's text formatting
I'm not hugely pleased about hardcoding a filepath that might not be there on all platforms, but I'm not quite sure what else to put there that would work.
It might be worth adding a section that explains how fs is implemented, and then put that example at the end.
You can make the example define a function that takes the path as argument.
I had intended it go under the paragraph that sort of does that on the fs module overview
that works I guess, I was just not sure if you'd appreciate the code not actually being run in doctests. But I suppose this is simple enough to be okay if it just compiles
Meh. It's fine. If you want, you can add a hidden main function 🤷♀️
mh, not sure if it's worth making them work on all platforms; if a compile test is fine I'll just do that.
Fine with me.
https://tokio.rs/tokio/tutorial/channels
The first example,
tx.send("sending from first handle").await
_ = tx2.send("sending from second handle").await;
would fix the compiler's warning
I apologize for the presentation..
A friend and I have been using tokio for a long time and know to "never block in an async context/runtime". So when we talk with newcomers to Rust and async/await, we tell them so too. But we don't just want to say "don't do it", we'd like to say "look at this article or site, it shows what happens if you do".
We were unable to find an up-to-date and precise article that does exactly that, for easy sharing. I remember things like the current thread might get taken completely off the runtime and any pending tasks might get stuck with it, never to recover. And similar things like that, maybe from some older recordings of Jon Gjengset (pre tokio 1.0).
So I would just love to know if there is a nice article out there that explores what would actually happen (or can happen) if you execute blocking calls on the tokio runtime. Best if it even explores whether it might be okay to block for shorter times, and what happens when a long blocking call completes (is the runtime able to recover and what not).
If anybody has a good article on it, please let me know.
Thank you. I know and read that article. But I feel it's not fully covering what bad things might happen.
This can be a major issue because it means that other tasks on the same runtime will stop running until the thread is no longer being blocked.
Which means technically, all async tasks on the thread are halted until the blocking call finishes. But the following sample after that statement shows it in such a simple example, that it doesn't seem to break anything, just running things in serial instead of in parallel.
That is obviously not good, but not too bad either, is it? (as the execution of async tasks continues after the blocking call)
The default Tokio runtime spawns one thread per CPU core, and you will typically have around 8 CPU cores. This is enough that you can miss the issue when testing locally, but sufficiently few that you will very quickly run out of threads when running the code for real.
So I guess the worst case is, if you have all async threads blocking on a long running function call, it halts all async tasks. But again, wouldn't it recover after the blocking calls finish?
How about this one then? https://github.com/tokio-rs/tokio/issues/4730
However, in general, if something is blocking but then stops blocking, then the runtime will recover.
You don't end up with problems forever unless something blocks forever.
Thank you, that's a really interesting issue. Also, learned that the tokio runtime has a single "IO owner thread". Didn't know that.
Thank you so much for your time. Very much appreciated 🙇
Note that the IO driver can be moved between worker threads, but there's always just one thread driving it at a time.
So in the issue, all tasks halt, because we were unlucky that the blocking thread was the one with the IO driver. Hence, all other threads don't wake up until the blocking call finishes, so the thread with the IO driver can notify the other threads again?
The other threads don't wake up because although new events would be available if you polled the IO driver, nobody is polling the IO driver.
Oh, because only one thread is responsible for polling the IO driver, and that one is busy with a blocking call 💡
This also shows that it does recover if you stop blocking - then it will be able to poll the IO driver.
Definitely really bad for a web server to block, as the whole request processing might halt.
In case of CLI apps, it seems to not be too bad if a few blocking calls happen here and there? I mean, it always depends, but the typical CLI app, where it runs a single command and shuts down, or even some dashboard CLI might just freeze for a moment (unless you have extremely long blocking calls of course)
I guess, but that's because there isn't much reason to use async for CLI apps.
For file I/O bound tasks yes, but cli tools like link checkers or load testers, there's surely a benefit in using async.
Or, in our case: a newcomer to Rust who grabbed reqwest since it's in the top results for http clients, without yet understanding the whole async vs blocking story in Rust. For his simple CLI app a blocking http client would have been more than sufficient, but he didn't know.
He mixed that with blocking reads from stdin, so we told him it's better not to do that. As it turns out, in his case it wouldn't really have mattered, but it's probably good to steer towards the mindset of never doing blocking calls in async code in general 😅
async is also oftentimes a much better paradigm for CLIs that have to manage a bunch of child processes…
not necessarily because of performance, but because you might want things like cancellation/timeouts, processing output from child processes in the order they complete, etc
Hi! The UnixStream docs say the socket can be "accepted from a listener with UnixListener::incoming" [1] but no method incoming exists on UnixListener [2]
This is confusing, any suggestions for what they should say instead?
[1] https://docs.rs/tokio/latest/tokio/net/struct.UnixStream.html
[2] https://docs.rs/tokio/latest/tokio/net/struct.UnixListener.html
accept?
@errant star @static nexus I know you both have wonderful blogs that you link to in here pretty often to great effect, would it maybe make sense to mirror some of that content on tokio.rs?
Maybe not in blog form but translated with credit into a tutorial/guide page?
Sure, I'd be happy to. I haven't actually written anything about Tokio directly, mostly about tonic and axum, not sure if that belongs on tokio.rs?
I have pondered doing so before, but haven't taken the time to do it. I don't mind doing so.
You may note that the Tokio tutorial and related pages are under a path called /tokio/. This structure was copied from the previous website, and I believe it also contained info for other Tokio projects under similar paths for them.
Axum in particular is certainly a Tokio project — it's under the Tokio github org.
Ok cool. I think I'm gonna do a second extensions post with more examples taken from discussions here. Maybe after I'll clean it up and open a PR to add on Tokio.
Hi, are the docs regarding tokio::time::Sleep not being Unpin outdated?
It is not Unpin.
pin_project seems to generate Unpin impl
It looks bad in the documentation, but the where bound is false so it doesn't apply.
okay, it's just confusing on first glance.
Hi - I am a new Tokio user.
I am reading the tutorial, and I feel that I am more confused than I should be. It could just be that I am new to Tokio, but it feels like the tutorial is mixing up the introduction of new topics and the code that we need to write in the tutorial. It is not clear when or where code needs to be created.
Am I the only new user that feels like this, or is there room for improvement?
I agree. And actually, I don't think the tutorial needs mini-redis.
Are you referring to my question about the tutorial?
yes~
The more I read, the more confusing everything is. For example, on this page https://tokio.rs/tokio/tutorial/framing there is a heading called "The Buf trait", but there is nothing in the text that shows an implementation of the Buf trait or any other trait.
When adding the read_frame method on the Connection struct, the code is missing the impl Connection statement.
Rust is a really hard language to learn, and async/await is also tough to grasp. It is important that we keep the documentation clean.
Coincidentally I am reading that same page for the very first time. I don't find it particularly confusing. I think the doc is good enough as long as I can understand what the Buf trait is in principle and can reconstruct my own implementation by reading the doc. Seeing the code for the Buf trait would then be a confirmation/verification of my understanding. My 2 cents.
On https://tokio.rs/tokio/tutorial/async, the futures crate is used under the Mini Tokio section before it was later introduced (ie added as a dependency futures = "0.3") under the Updating Mini Tokio section. Is this intentional?
hey guys, I have a question about tokio::spawn. The docs say tokio::spawn(async {}) will create a task, and that tokio::select! runs on the same task. Then how do you explain this example?
use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (tx1, rx1) = oneshot::channel();
    let (tx2, rx2) = oneshot::channel();

    tokio::spawn(async {
        let _ = tx1.send("one");
    });

    tokio::spawn(async {
        let _ = tx2.send("two");
    });

    tokio::select! {
        val = rx1 => {
            println!("rx1 completed first with {:?}", val);
        }
        val = rx2 => {
            println!("rx2 completed first with {:?}", val);
        }
    }
}
doesn't it run on two tasks?
Yeah, from my understanding the channels do run on different tasks but select will only use the response from the first one that finishes.
the code inside of the select! runs in one task (in this case, the main task), while the code in each of the two async blocks you passed to spawn will run in their own separate tasks
@plucky bloom so you mean that in the main task you can spawn multiple tasks, and select! will collect all spawned tasks in the main task?
if so, I think this part needs a more precise description; calling both of them just "task" is very confusing
select! isn't "collecting" anything, no
you can only run async code in Tokio if it's running as a "task": this is roughly the async equivalent of a regular OS thread.
for the above code, the #[tokio::main] macro is doing something that you can conceptually think of as:
tokio::spawn(async {
    let (tx1, rx1) = oneshot::channel();
    let (tx2, rx2) = oneshot::channel();
    ...
});
so that's the first task, as Eliza mentioned: the "main" task. then there's the two tasks being spawned via tokio::spawn. all three tasks are separate.
select! is a macro for trying to await multiple futures and doing something with whichever future, out of all of the futures being select!-ed, completes first. since that code is in main(), it runs on the "main" task.
Good morning, all.. I have a quick question regarding a snippet included in the tutorial documentation. Is this the proper channel for that type of question?
You can ask here. #tokio-users is also ok
Thank you, @errant star ! I have resolved what I was planning to ask. This tutorial is so well thought out; learning lots along the way
white list?
?
hi, I did some cool work, but I need someone to help pass these checks https://github.com/tokio-rs/website/pull/688
I started the CI checks
@errant starHow about adding a search bar to tokio.rs?
It's not clear to me that there's a good way of doing that.
Many open source projects use the Algolia search API for their static documentation sites, which is free for open source projects
For example: https://beta.reactjs.org/ https://vuejs.org/
I'm not sure if tokio.rs needs a search feature. If you think it does but don't know what approach to take, I can try to investigate @errant star
If there's a good way of adding it, then I don't mind
It used to have a search feature provided by a third party service
I think it was this https://www.algolia.com/
Hi. Is there a new good book covering Axum in detail? Paid books are ok too
Don’t think so
So there's this link to a blog on the tokio website in the code which seems to be dead: https://github.com/tokio-rs/tokio-core/blob/fdba3f18370c67ec0c99c119157ad8e25be99fd9/src/reactor/poll_evented.rs#L45 Just wondering where it was moved to or if it still exists
Oh found it https://github.com/jonhoo/tokio-website/blob/master/content/legacy/going-deeper-tokio/core-low-level.md , wondering if the link should be updated or just removed 
How does one reconnect a tokio-tungstenite websocket connection? Say it's already connected but I need it to be reconnected, gracefully or not?
Documentation and examples are good enough, trust me
I'm a little bit confused by https://docs.rs/tokio/latest/tokio/sync/mpsc/index.html - which says "The bounded variant has a limit [...] if this limit is reached, trying to send another message will wait until a message is received from the channel." - it sounds like Sender will block when the channel is full.
But then at the end of the next paragraph, it says "If the bounded channel is at capacity, the send is rejected and the task will be notified when additional capacity is available. In other words, the channel provides backpressure." - which sounds like it doesn't block.
Is it just that I'm not thinking in terms of Tokio "asynchronous tasks", and that what the second paragraph means with "the send is rejected" is that the "asynchronous task" is blocked and later notified to retry the send? Such that the calling code does indeed block, this "reject & later notify" being just the implementation detail of what's happening behind the scenes for the blocked send?
Hmm. I guess this became more a "noob" question than a "docs" question. 🙂
"the send is rejected" is probably inaccurate, even at the implementation level
anyway, the send method will block until it is able to send the message
Finally green: https://github.com/tokio-rs/tokio/pull/5823
I will need to figure out how to increase max permutations though
Thanks... I suppose bringing it up here isn't really the ideal way to request documentation improvements. 🙂
Well, one option is to ask questions here, learn, then submit a doc PR 🙂
I think I need to learn more about the implementation to be able to feel confident with doc PRs. I'll see if I can get to it.
I have a question about the following (module doc of tokio::fs):
Tasks run by worker threads should not block, as this could delay servicing reactor events. Portable filesystem operations are blocking, however. This module offers adapters which use a blocking annotation to inform the runtime that a blocking operation is required. When necessary, this allows the runtime to convert the current thread from a worker to a backup thread, where blocking is acceptable.
Is there a way to tell/know which operations will block (e. g. spawn a backup thread) on which platforms?
All async functions and methods in tokio::fs use spawn_blocking.
They all use blocking fs internally? 🤐 I had thought they would use epoll on Linux and Kqueue on Macos which are, IIRC, block-free
Epoll and kqueue don't support normal files.
They just claim that the file is always ready even if the operation would block.
I see, thanks for the info
you can use kqueue with aio to do some fs operations on bsd, but that has other restrictions and difficulties
(mac is bsd)
So on platforms without io_uring, blocking file IO is as good as you're gonna get?
Tokio doesn't use io_uring for tokio::fs, even if it is available.
No, but then it's possible to switch to using tokio_uring I mean
remind me to do a draft PR on this if I don't get to it in the next two weeks
nice!
So https://docs.rs/tokio-macros/latest/tokio_macros/attr.test.html the builder and runtime links are broken. I tried to fix them via changing to [`tokio::runtime::Runtime`] following the docs from tokio-util but that only seems to work if the item is a dependency and doesn't work for dev-dependencies. Also, doing a [target.'cfg(doc)'.dependencies] and adding tokio as a dependency for docs won't work because of circular dependency issues 
The links work when you view them in the tokio crate.
hmm seems a bit of a UX pain when I google for the tokio main macro docs and the macro crate docs page is the first result 
wonder if there's a solution that works for both 
you could do some hacks with #[cfg_attr(docsrs, doc = "....")] maybe
hello, friends. Can I use egui with tokio?
If you mean using tokio within an egui application, yes it's possible. There's a few ways to go about it but one of them is to pass a Runtime to your application, spawn tasks for async work and communicate back to the main thread through channels.
that has been very helpful for me, thanks
Further reading in case you need it:
Sorry I didn’t realize how long ago this was posted
Hello!
I want to work on this issue #6263 ~
I wrote a quick document describing my thought process before I start implementing anything. I also prepared some questions in advance to better understand the scope of work
It would be great if someone could review, and perhaps answer those questions~
Thank you. I'm sorry I just saw it. It's been a long time since I've been online.
I've been trying to understand codecs and how to use them. I started at https://docs.rs/tokio-util/latest/tokio_util/codec/index.html, which got me thinking that what I need is either a LinesCodec or LengthDelimitedCodec (I'm just playing around so doesn't really matter which one) and a FramedRead to use it with a stream. I followed through to https://docs.rs/tokio-util/latest/tokio_util/codec/length_delimited/index.html which shows me how to configure a decoder, and https://docs.rs/tokio-util/latest/tokio_util/codec/struct.FramedRead.html which tells me what FramedRead has.
But I still haven't really seen any concrete example that ties everything together and shows me exactly how to read a frame from a stream. I'm gathering that I actually need tokio_stream::StreamExt so I can do .next() on my FramedRead. This does not seem to be explained anywhere in the docs I've seen so far, and I only figured it out looking at https://users.rust-lang.org/t/understanding-framed-in-tokio/42449/2
Am I missing something obvious? This seems like such a fantastic feature to have, but nothing in the docs (at least whatever I've seen so far) seem to really help me to become successful.
No, you're not missing anything. If you want to open a documentation bug on the Tokio repo, that would be great!
The front page of Bytes docs (https://docs.rs/bytes/latest/bytes/) have Buf and BufMut points to a 404. But local cargo doc generates the correct link
Perhaps re-triggering the docs ci job fix this?
Thanks for catching that. We need to publish a new version soon anyway, so we can fix it in the next version's docs.
Hi - the following async move in the Tokio docs (https://tokio.rs/tokio/tutorial/spawning) seems redundant:
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    let listener = TcpListener::bind("127.0.0.1:6379").await.unwrap();

    loop {
        let (socket, _) = listener.accept().await.unwrap();
        // A new task is spawned for each inbound socket. The socket is
        // moved to the new task and processed there.
        tokio::spawn(async move { // <----------------------------- THIS ONE
            process(socket).await;
        });
    }
}
The process function takes a TcpStream, not a reference to one (and returns a future), so the following works just fine:
tokio::spawn(process(socket));
Am I missing something?
No. It's not needed in this particular case.
Is there any good doc explaining how Tokio channels are implemented?
howdie, just following the tutorial. Bit confused by https://tokio.rs/tokio/tutorial/spawning under subheading "Tasks"; it seems the example is incorrect, or I am misunderstanding it. The text talks about "JoinHandle" but it's not in the example
The variable called handle has type JoinHandle.
ahh i get it. maybe it would be a little clearer if "Awaiting on JoinHandle returns a Result." was "Awaiting on a JoinHandle returns a Result."
Thanks @errant star
yes, its still there, even my framework is having this, thanks, let me change that
see what RustRover is showing here; I think the people maintaining tokio have to change the documentation comments
Changing that really had a huge performance boost
Has there been any thought about formalizing cancellation safety signalling? I know Alice did a lot of work to document all of Tokio's APIs and their respective cancellation safety posture. I was thinking it'd be interesting to have this be machine checkable. Something like
#[cancel_safe]
async fn foo() -> Bar;
struct Foo {}
#[cancel_safe]
impl Future for Foo {}
with an accompanying linter(ideally just part of cargo clippy), alternatively some external tool
The derive macro wouldn't need to do anything, it just exists as a marker
It's tricky because you can't really implement traits for the futures returned by async fn.
Also, when select! isn't used in a loop, you often don't need cancel safety. E.g., if the server is shutting down, it doesn't matter that data on some connection is lost when you cancel it.
I was thinking that the cancel_safe macro doesn't have to do anything at runtime, it would only serve as a marker for a linter tool
Also, when select! isn't used in a loop, you often don't need cancel safety. E.g., if the server is shutting down, it doesn't matter that data on some connection is lost when you cancel it.
True true
That's an interesting idea.
#[cancel_safe(required = false)]
tokio::select! {}
# Check the source tree based on these annotations, assume any future that isn't explicitly marked is cancel safe
cargo ensure-cancel-safe --strategy=assume-cancel-safe
Feels to me like it would need to be a stateful multipass lint(one pass to collect all types marked with cancel_safe and then another to check select! and other constructs where cancel safety matters)
I asked in the clippy Zulip channel about feasibility
I thought some more, and it would be pretty tricky for any async fn futures where the callsite is far from the usage
Messed around some more with this, ended up with a type level solution
@shell carbon very nice, this has the potential of being super helpful. I will carve out some time to look more into this
I don't find this to be true. I have seen many instances of select w/o a loop resulting in cancel safety related bugs in services
That is also pretty interesting...
I would really like to have some sort of cancel safety related tooling / lints by default for Tokio apps. The biggest challenge I see is how to manage backwards compatibility with select! Emitting a warning could work...
fwiw, this topic probably should be in #tokio-internals
Ah, yeah, I don't know about "often". But it is a complicating matter in that you cannot entirely outlaw such uses.
100% it cannot be outlawed
But, I think saying you should only pass cancel-safe futures to select is fair
"I don't care about the end state because it is isolated and dropped" is "cancel-safe"
ah, hm
that means that cancel-safe futures may be composed of stuff that is not cancel-safe, but I guess that's fine
Right, you are just saying "this is cancel safe because I say so"
@errant star another thing that I realized, cancel safety and panic safety are very similar (same thing?)
if a future isn't cancel safe, it probably isn't panic safe either
it just is that cancellation happens a lot more as part of the regular execution of code
I guess both are cases where code abruptly gets stopped somewhere.
Cancel safety just only gets stopped at awaits, and panic safety is when you only get stopped at things that can panic.
right, panic safety would be stronger than cancellation safety
either way, some sort of macro could be cool
I agree, it also is worth thinking if it would be possible to minimize having to annotate fns
it would probably be too much friction if users have to annotate most of their fns
well, they only need to annotate what goes into a select
@errant star In theory, it would cascade out. A method is cancel safe only if the methods it calls are cancel safe
like, you want to be able to catch an issue with introduced non-cancel safe code in the call stack
yeah, but that's just not true
async fn my_func(&mut self) {
    self.num_active_calls += 1;
    my_cancel_safe_fn().await;
    self.num_active_calls -= 1;
}
@errant star sorry, I didn't mean it was a sufficient condition, just necessary
if all .await calls in a method are cancel safe, then the method might be cancel safe. If one .await call is not cancel safe, then the method is not cancel safe (forgetting the case where a user doesn't care if the method is technically cancel safe)
It can be opt-in in a minor version, kind of like biased; is today.
tokio::select! {
    require_cancel_safe;
    /* branches */
}
Thanks, I've been meaning to come back to it too. Want to see if I can implement it end to end into select!
@shell carbon yeah... but if it is a lint, it might be possible to just default to warnings
anyway, it is worth exploring more imo
I'm not sure about my initial idea of using the derive macro only as a lint tool. I think it would break down across crate boundaries etc. The nice thing about how I implemented it is that it becomes a type level concern
I was thinking more a promise the future author makes, akin to AssertUnwindSafe, than something that would be checked recursively
problem is, if you require require_cancel_safe to apply it, nobody will use it
Also, if you just annotate the outer method, what will happen is someone will come by later and accidentally make the call stack not cancel safe
So, the ideal situation is, there is some recursive check through the call stack
there's also the question of whether we want another of these files: https://github.com/tokio-rs/tokio/blob/master/tokio/tests/async_send_sync.rs
Good point, but the same thing is true for manually implemented Send, Sync, as well as AssertUnwindSafe.
That said it should be possible to do recursively too, more work at compile time though. Also might require a lot of annotations unless it can be determined automatically in some instances
Yeah, in my mind, the difference for real world devs using tokio, is manually implementing Send, Sync, and AssertUnwindSafe basically never happens, and (at least Send, Sync) it is unsafe
I'm coming from the POV of teams of devs that are having this issue
where you can write safe async rust code that is not cancel safe
and it is a pretty common issue in practice (based on my observations)
I'm hoping, but haven't verified, that the wrapping future I used can always be optimised out by the compiler. If it can't that's another problem
Perhaps. I guess a lint does work around that
@shell carbon I think the "easy" work around would be to have it conditionally happen based on a cfg flag
and then there can be a cargo check-cancel-safe
or something like that
or, just have it done w/ debug assertions, etc...
Good idea!
w/ debug assertions is probably the best route
I think the most obvious recursive implementation of a #[cancel_safe] macro would require all .await calls to be calling methods annotated w/ #[cancel_safe] as well, or to wrap the .await w/ cancel_safe(foo()).await (or something)
but that would be pretty noisy...
the question is, can we decide if an async method is cancel safe via static analysis. If so, another option would be to use a clippy lint type thing I think?
Anyway, those are my thoughts. I would love to see the question of "cancel safety" lints explored more
I chatted a bit with the clippy folks on Zulip and it's possible to look for the mere presence of an attribute without needing to change the resulting types as I did. clippy::has_significant_drop is an example of this kind of inert attribute
Also, it's possible to check across crate boundaries
Since clippy runs after macro expansion something is needed to identify the callsites where futures needs to be checked for cancel safety e.g. select! after expansion
Yeah, if we can do it as a clippy lint, that'd be awesome.
Yeah, although having it surfaced structurally in the type system also seems nice to me
A clippy member here, and just happened to pass by. (and obviously, a mention of clippy caught my eye)
While clippy can check for attributes, how would you propose to use it to lint cancel safety?
It depends on the method of signalling cancel safety. I'm toying with two atm, one based on a marker trait and another an inert attribute. For the attribute case we'd need to trace any binding for a future used in a context where cancel safety matters and verify that the block/function that generated the future was marked cancel safe
A definite con of the inert attribute case is that doing said tracing is probably quite complicated, whereas checking that a trait is implemented is trivial
I don't think it's that complicated to implement in clippy. You could go up or down the node tree and check all the futures.
Though personally I'd probably see if marker trait works for this, it'd be extensible to downstream crates (in addition to leveraging the type system, ofc).
The main issue with the trait approach is implementing it for futures returned by async fn.
Right. That requires changes from rustc.
Well, I think a clippy lint is beneficial for others as well.
Feel free to ping me in zulip/here if you need any help with the clippy part, if you decide to go that route @shell carbon
The marker trait would be really nice and my POC does support it, but it comes at the cost of having to add a wrapping future to all async fns which is a bit crap for compile times and potentially runtime
Any comments or perspectives on this one? https://github.com/tokio-rs/website/pull/765
Ah, sorry, I forgot about this. There is also https://github.com/tokio-rs/website/pull/764, though ...
No worries! Are these crates good with Tokio? My 2 cents on this topic: if a crate is vetted and sound, it is fine and useful to mention it in the Tokio docs. Otherwise, it probably should not be, as any mention reads as an endorsement, and some people will take that path.
It's fine ... if you are careful.
at least in the case of dashmap
I don't know whether the other crates have lock guards that are Send
We sure did try to make it work, and we could not. My theory (from a practitioner's perspective) is that some crates just are not a good match with Tokio, and are best avoided.
I never finished this draft, but I've talked about how to use dashmap correctly here: https://draft.ryhl.io/blog/shared-mutable-state/
curious to hear your perspective
I love this post, and I think we had once an email exchange where I asked if a subset of the content or link to the post could be added to Tokio docs, but did not get around to suggest those changes.
Ah, yes, now that you mention it, I remember that you emailed me about something along those lines.
Given the sad experience with dashmap (and more importantly, given that others have bad experiences as well), I went for the easy solution, which is to just remove it from the official Tokio docs.
Just to confirm, the issue you ran into is that it's too easy to end up blocking the thread / deadlocking?
Yeah, I did, I would rather not have anyone ever have to go to the deadlock party
The issue we had was with an actor-based prototype, where we managed the objects in memory with dashmap, and even with heavy vetting and best effort, so to say, we could not get a solution that would not deadlock even under the lightest load.
Glad to have survived that, to tell the story, but yeah, I don't wish that to anyone. I also find it interesting, that even knowing about these issues, we just could not make it work.
Do you use spawn_local?
I can't remember, we buried that prototype somewhere. Could take a look later though. 🙏
I just looked through the two other libraries, and it looks like only flurry has non-Send guards.
In any case, it might be helpful to have lists of crates that are known to play well, and crates that could be risky, and the reasons why. Something digestible for the lazy or busy developer who might be building with tokio for the first time.
How about this: Since I have you now, I merge the other PR first, and then you rebase yours on top to remove dashmap and leapfrog from the list. That way, we merge both PRs, and the list ends up with a recommendation of flurry, which doesn't use deadlock-prone guards.
Fair enough, thanks!
ok, I merged it
We did not. Is it possible that some other crate in the mix did, and that could have caused the situation?
Will fix my PR by EOD
If there is a non-draft version of the above mentioned blog post, or you are happy to link the draft one, we could add a short phrase to the part where we mention these crates, about Send guards, and implications?
I guess it's fine if you link to the draft for now.
Looking at this page here https://tokio.rs/tokio/tutorial/shared-state, we might consider the following changes:
- Moving "Tasks, threads and contention" after the part on restructuring code, so that restructuring code is right after the compiler error
- Add a warning on Send MutexGuards in the compiler error section
- Add a link to the draft post in that section
- Mention Send MutexGuards again in the mutex crate section.
- Add link to the draft post there as well
These could be quick fixes, that would help guide more developers on the right path.
I can't help asking though, if the mini redis example itself, should not be refactored to use the wrapped mutex pattern as well? That might help to drive the point home. After all, I'm sure quite a few folks just copy whatever code the tutorial presents.
Also I have some additional questions on this page...
A common error is to unconditionally use `tokio::sync::Mutex` from within async code.
- Why is this an error? Is it because it is slower, and should be more or less the last option?
- Is it so, that holding a Send MutexGuard across an await will deadlock only in the multithreaded runtime?
(And no, we did not use the wrapped mutex pattern in our ill-fated actor prototype that used dashmap. We probably should have.)
I'm also curious about when to go for wrapped mutex, and when might actor model be suitable. DB (unless in the process' memory) means IO, so should the DB be rather managed by an actor, than a wrapped mutex?
Ah, of course, an external database is not really shared mutable state from the application's perspective, in the same sense. So it would not count as an IO resource, typically, I guess.
It can deadlock in both current-thread and multi-thread runtimes. It's actually easier to get a std mutex to deadlock in a current-thread runtime, because the deadlock is caused by the task that holds the lock being scheduled on the same worker as the task that is trying to obtain the lock (in the worst case in a way that it will never get stolen, which means either a current-thread runtime, or the task is in the LIFO slot, or maybe the other workers are all idle).
The tokio mutex is slower than the std mutex. So if you’re only locking some data structure in memory that doesn’t require awaiting, then a std mutex is often (perhaps always) more efficient.
Right, I was thinking of a "thread" holding the lock, and therefore for a moment asking what would be the issue, but, tasks are green threads, got it.
I wrote about why holding mutex guards across await points can be problematic here: https://hegdenu.net/posts/understanding-async-await-3/
It has some diagrams too.
Yeah, this was explained in another section, just wanted to know if there were other reasons as well. 🙂
Thanks, I will check it out! I think all in all I would like to try to improve the official documentation, but the better grasp one has of the concepts, the easier it is of course. And sometimes it makes sense to link to other posts as well.
I think that would be great!
Here are the quick fixes, more or less as outlined above: https://github.com/tokio-rs/website/pull/768
Just to be 100% clear: "Spawn a task to manage the state and use message passing to operate on it" refers to the Actor model, right, as outlined in this blog post https://draft.ryhl.io/blog/actors-with-tokio/ ?
In general, what do you think about sanitizing all the mutex examples, so that they would all have wrapped mutexes, instead of passing Arc<Mutex<_>> around? I believe this would make it less likely that tokio users end up with problematic code. (I'm volunteering of course, if the timeline is flexible.)
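For concreteness, the "wrapped mutex" pattern referred to here looks roughly like this sketch: the Arc<Mutex<_>> is a private field, and every method locks and releases before returning, so a guard can never be held across an .await by accident.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Wrapped mutex pattern: the Arc<Mutex<_>> is an implementation detail.
/// Callers never see a guard, so they cannot hold one across an .await.
#[derive(Clone)]
struct SharedMap {
    inner: Arc<Mutex<HashMap<String, u64>>>,
}

impl SharedMap {
    fn new() -> Self {
        Self { inner: Arc::new(Mutex::new(HashMap::new())) }
    }

    fn insert(&self, key: &str, value: u64) {
        // Lock, mutate, and release within this synchronous method.
        self.inner.lock().unwrap().insert(key.to_string(), value);
    }

    fn get(&self, key: &str) -> Option<u64> {
        self.inner.lock().unwrap().get(key).copied()
    }
}
```

Cloning SharedMap clones the Arc, so handing one to each task replaces passing Arc<Mutex<_>> around directly.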
@errant star @spring crypt Any comments/feedback?
I'll have a proper read through tomorrow. Sorry, kind of busy day today.
Hey, no worries! Take your time 🙏
@errant star Any further feedback on this PR? https://github.com/tokio-rs/website/pull/768
Lgtm. Feel free to merge it @spring crypt, otherwise I can do it when I get to a laptop. (I'm on vacation)
Merged. Thanks Alice.
Hello Guys
Hoping this is the right place to ask
I'm reading the tutorial (https://tokio.rs/tokio/tutorial/channels) and using the oneshot
example I got the following question:
based on the image's code
in the t2 spawn,
tx2 is a cloned reference from tx, but using the oneshot channel kind there is no way to have multiple senders.
Am I missing something?
(Maybe It can be possible using Arc)
tx and tx2 are mpsc channels, not oneshot
so you can have more than one
Yes, the screenshot is from the refactor section where it is using oneshot
that code is the example code
on the website
btw thanks for your response Alice
I don't really understand the question
Sorry
The situation is:
I was confused by the example; reading the article again, I found my error
Thanks for your help and sorry if my words were rude (My english is not perfect :S )
hi
I am dracula
I can't read the doc because its too shiny
we want dark theme please
I think we merged a PR that makes it respect your browser setting
what does this mean?
I think it would be great if the Tokio website also offered an mdbook version of the tutorials, which also support many themes.
https://github.com/tokio-rs/website/pull/701 seems not merged
I tried merging it, but it seems like there's some issues. Anybody who can help? 😦
Dark mode
Does anyone want to take the time to update the website to the new next.js? Otherwise I'm just going to disable dependabot again. I don't really think staying up to date brings any real value, so I don't care to spend the time myself.
yeah - I'll take a look. Do you have any constraints that aren't obvious on this? Or just make it simple to merge and pass CI.
- Pass CI.
- Don't change how the website looks.
👍
Thanks!
https://github.com/tokio-rs/website/pull/790 is all the npm packages except next.js and bulma, as preparation for the next.js upgrade. I'd suggest that it's worthwhile merging it and then doing the other two on top of that. I didn't see any obvious breaks in testing.
Merged
Thanks
Bulma is still a bit of a pain as 1.0 broke a bunch of CSS variables which are defined in the site's scss file, so I'm punting that one for now. I'd recommend this would need someone that enjoys spelunking through css / scss / etc. to upgrade it.
That said, bulma 1.0 has dark mode theme support built-in, so this negates the need for the dark mode PR.
Man, the javascript ecosystem is a mess. Have they never heard of backwards compatibility?
You can just swap
bulma@0.9.4/css/bulma.min.css with bulma@1.0.0/css/bulma.min.css and everything will work. Things will look slightly different, but they will still work.
https://bulma.io/documentation/start/migrating-to-v1/
This is crazy to me. We had a designer spend a lot of time on putting together the current look. Changing the design because of this kind of thing is bullshit.
If I was going to make some PRs to change the docs, is there a preference for larger PRs with multiple commits moving things around, or small single focus PRs. The latter can get annoying to work on especially when there's dependent changes between each commit, so I'd prefer the former in this case if that's ok with the powers that be.
"github, i am once again asking you to give us stacked diffs"
the former is fine
Even "github, please allow me to manually choose which commits are shown in the diff view" would suffice there.
Hi guys, I'm a beginner working through the tokio tutorial. I have an issue with the example in the image, which can be found here: https://tokio.rs/tokio/tutorial/async#a-few-loose-ends. I noticed that the main function returns before the delay future completes. I don't know if that's intentional because it's just a demonstration. And there is still a race condition even though we wrap the waker in a mutex, as it might be updated after the timer thread returns. This could result in mis-scheduling an expired future. This example is really confusing, and I still don't know how to handle this scenario 😩
The example returning before main is just because it's an example. The point is that the waker may need to be updated after first poll.
I got you, thanks a lot. And how do you maintain the waker's consistency? What if an old waker wakes up the future before the waker gets updated? I wish there were boilerplate code to address the race condition. If I'm implementing a custom poll function and fail to handle the waker properly, it could lead to unexpected issues.
The futures crate has a utility called AtomicWaker that is often useful for this.
It works. Thank you 🥳
Hello community
I'll be talking about multithreading and Async in Rust
It will be for a community meetup, just sharing what I know, just for fun
Do you know if there are diagrams/architectures related tokio?
(I'm looking for visual representations of the runtime/workers/global queue/etc)
Thanks for your help 🙂
https://tokio.rs/blog/2019-10-scheduler has some good diagrams
obviously only a subset are actually relevant to tokio, the first few illustrate other approaches
You didn't ask, but I highly recommend including concepts from alice's blog on blocking: https://ryhl.io/blog/async-what-is-blocking/
That is IME one of the harder concepts for people to grasp when reasoning about how/when to use tokio. Essentially, that a thread != a task, that threads pre-empt each other, whereas tasks execute sequentially on a given worker thread and therefore can have very unpleasant effects if one task takes a long time.
I tend to introduce this by discussing the difference between concurrency and parallelism.
Thanks for your help 🙂
Really appreciate it
The meetup will be in Spanish
I'll share the link to the streaming just in case anyone want to attend 🙂
The schedule is:
January 17, 2025
7:00 pm UTC-6 (Mexico Central Time)
Nice, probably toss it in #tokio-users as well for visibility
error: failed to select a version for the requirement tokio-macros = "~2.5.0"
candidate versions found which didn't match: 2.4.0, 2.3.0, 2.2.0, ...
location searched: crates.io index
required by package tokio v1.43.0
... which satisfies dependency tokio = "^1.43.0" (locked to 1.43.0) of package Omama v0.1.0 (/run/media/allawiii/projects/zed/Rust/Omama)
if you are looking for the prerelease package it needs to be specified explicitly
tokio-macros = { version = "0.2.0-alpha.6" }
[dependencies]
tokio = { version = "1.43.0", features = ["full"] }
Delete your lock file
cargo clean?
rm Cargo.lock
I don't think cargo clean deletes it
yes, cargo clean does not delete it
thanks, everything works properly
hi guys. What are some good readings to understand how tokio works internally? Like, does it create threads/processes, and when/how? Is there something like an event loop?
The source code probably 🙂 there are some blog posts that cover it though, if you look a few years back.
I'd start with pollster, then mio, then https://fasterthanli.me/articles/understanding-rust-futures-by-going-way-too-deep, then the tokio codebase.
The tokio codebase is not always the most accessible, but with some prior reading you should figure out what to look for
what is pollster and mio?
Two crates that are good to familiarise yourself with. Pollster is the smallest functional async runtime, that is reasonable to read through. Mio is an async-io abstraction layer that tokio uses
is it true that you cannot (1) change the number of threads in rust tokio during runtime, and (2) that tokio involves non-preemptive threads?
yes to both
you cannot change the number of worker threads, and tasks on a tokio runtime are cooperative and do not preempt each other
#tokio-users is a better place to ask
this is for work on the docs of tokio
so, tokio::io::copy,
when requesting a large file from the web,
it doesn't download the whole file into memory first!!?
but instead streams each arriving chunk directly to the hard disk?
Yes
thank you
async fn write_to_disk(mut tool: Response, full_path: PathBuf) -> OResult<PathBuf> {
let mut file = File::create(&full_path).await?;
let mut writer = BufWriter::new(file);
copy(&mut tool.bytes(), &mut writer);
Ok(full_path)
}
tool.chunk() returns bytes::Bytes object
Calling tool.chunk() reads the entire thing into memory, so don't do that.
or ... no
anyway
hold on
your screenshot shows a function called tool.bytes(), not tool.chunk()
which is it?
rustc: {async fn body of reqwest::Response::bytes()} cannot be unpinned
within impl futures_util::Future<Output = std::result::Result<bytes::bytes::Bytes, reqwest::Error>>, the trait std::marker::Unpin is not implemented for {async fn body of reqwest::Response::bytes()}
consider using the pin! macro
consider using Box::pin if you need to access the pinned value outside of the current scope
returns future of Result
- If you have a Bytes object, then you don't use tokio::io::copy. You just call writer.write_all(&my_bytes_obj).
- If you want to stream the data, then you don't use .bytes().
What you want is chunk, not bytes.
taking its example
let mut res = reqwest::get("https://hyper.rs").await?;
while let Some(chunk) = res.chunk().await? {
println!("Chunk: {chunk:?}");
}
and updating it:
I want every arriving piece of data to be stored directly on disk, no need to hold it in memory
let mut res = reqwest::get("https://hyper.rs").await?;
while let Some(chunk) = res.chunk().await? {
writer.write_all(&chunk).await?;
}
this will do what you want
async fn write_to_disk(mut tool: Response, full_path: PathBuf) -> OResult<PathBuf> {
let mut file = File::create(&full_path).await?;
while let Some(chunk) = tool.chunk().await? {
file.write_all(&chunk).await?;
}
Ok(full_path)
}
is this the correct way?
Call file.flush().await? after the loop. Otherwise it looks good
why do I need to flush it?
what does flush mean here?
It means to wait for writing to finish.
ooh
thats great point
flush functionality also exists in stdout()
now I understood what it means , thanks
so when the last chunk reaches its destination, the file buffer sends something like an acknowledgment to let the program know that all the data was written successfully, right?
or something like that.
Tokio's file performs the writing on a background thread. Calling file.write just starts the write without waiting for it.
so no need to call flush function?
yes need to call flush function
understood
thank you
I am facing a nightmare here :
rustc: type annotations needed
cannot satisfy `_: std::future::Future
I'll direct you to #tokio-users or #1324762623006347264
<@&1207366540299735151> Got 1365779859837816892 with a Steam scam
Not the right channel, just the one I had clicked on, sorry
I finished The Rust Programming Language ("the book") last weekend, this weekend I'm starting the Tokio tutorial here: https://tokio.rs/tokio/tutorial
I'm a mid-career software engineer (C, Obj-C, Swift), a Rust beginner, and a Tokio beginner. Are #1324762623006347264 and #tokio-users the best place for beginner questions if they arise?
yes. Can also recommend #beginners on the official Rust discord server, and the help channels on the community Rust discord server
https://users.rust-lang.org/ is also pretty good for meatier questions. Actually, writing your problem down well with context etc. and giving it a permalink so others can find the info on the problem is a good thing IMO.
because Semaphore::acquire* are async fns, I realized they can't reserve your slot in the fairness queue until the result is polled at least once. is this something tokio has (intentionally?) committed to (vs leaving open the possibility of replacing the async fns with -> impl Future fns that reserve the queue before constructing the future) and should the answer to that question be explained in the docs?
The queue is "intrusive", so acquire futures cannot be enqueued unless pinned.
It would not be a breaking change to adopt a similar API of Notify https://docs.rs/tokio/latest/tokio/sync/futures/struct.Notified.html#method.enable
Could someone please clarify if it is true that Tokio can handle tasks preemptively, and if so, how this wouldn’t potentially cause deadlocks in some cases?
Tasks are suspended only at .await points.
Sorry for the inconvenience, but do you happen to have a link to somewhere I can read in detail about how Tokio handles tasks preemptively? The .await thing makes a lot of sense to me, but I understand that it’s the task that yields cooperatively
Start here: https://ryhl.io/blog/async-what-is-blocking/
As follow-ups, you may find https://tokio.rs/tokio/tutorial/async or https://tokio.rs/blog/2020-04-preemption interesting.
Thanks
Hey, dumb question here! I've been experimenting a bit and it seems like the main task (the one from block_on or #[tokio::main]) isn't affected by work stealing and doesn’t migrate between threads. Is that right?
Correct. You can guarantee this because block_on<F: Future>(fut: F) -> R has no F: Send bound
But importantly, the block_on thread (if using a multi_threaded runtime) only executes that one future - it does not participate in the runtime at all
Ok, thanks
If you use a current_thread runtime, then block_on does also run other tasks
Yes, got it
Hi, I know this place is for talking about Tokio, but maybe someone could give me a quick answer. Could someone tell me if NLL depends on variance to coerce lifetimes?
The borrow checker is necessarily aware of variance. However, lifetimes are technically never coerced. Lifetime rules are only validated for correctness.
If I didn't explain it well: it's actually the exact opposite. Because Rust cannot determine the exact lifetime at a point, it extends it to its full duration. It happened to me when taking a &mut inside a type that made it invariant, and I couldn't reference it again even though I no longer used the value that borrowed it. Not sure if I explained myself clearly or just produced a tongue twister.
Does anyone have the link for Tokio?
Hi, could you help me with this question:
It’s about what happens when the blocking thread pool is completely in use. Let’s assume we are on Linux. For example, let’s take Notify without going into too much detail and see if I’m correct: when calling notified().await, Tokio likely spawns one of its blocking pool threads and puts it to sleep using a futex. Then, once someone wakes it up, it reschedules the task that triggered it.
If this is the case, then a non-blocking worker would actually block if it needs to use a thread from the blocking pool and that pool is completely occupied.
Maybe Notify can be handled without spawning any threads, but it’s just to illustrate the idea.
This is why the docs say that spawn_blocking is intended for bounded work only.
https://docs.rs/tokio/latest/tokio/task/fn.spawn_blocking.html#when-to-use-spawn_blocking-vs-dedicated-threads
There are no primitives in Tokio that work like the one you mentioned. Tokio only uses spawn_blocking in cases where the operation is something that will exit on its own eventually without outside input.
Such as reading a file.
Notify works by storing a Waker for the task that is waiting. You can think of a Waker as a closure that enqueues the async task to the queue of tasks that need to be polled again.
This way, Notify can function without taking up a blocking thread.
Ok, thank you very much. I assume that Notify has a counter and keeps the Waker on the heap while there are live instances, so it can be handled entirely in user space. And as you said, Tokio does not subtly spawn blocking threads when we are waiting on a future, so I suppose my doubt has been clarified. Thanks.