//! Helper types for prepared statement caching
//!
//! A primer on prepared statement caching in Diesel
//! ------------------------------------------------
//!
//! Diesel uses prepared statements for virtually all queries. This is most
//! visible in our lack of any sort of "quoting" API. Values must always be
//! transmitted as bind parameters; we do not support direct interpolation. The
//! only method in the public API that doesn't require the use of prepared
//! statements is [`SimpleConnection::batch_execute`](super::SimpleConnection::batch_execute).
//!
//! In order to avoid the cost of re-parsing and planning subsequent queries,
//! by default Diesel caches the prepared statement whenever possible. This
//! can be customized by calling
//! [`Connection::set_prepared_statement_cache_size`](super::Connection::set_prepared_statement_cache_size).
//!
//! Queries will fall into one of three buckets:
//!
//! - Unsafe to cache
//! - Cached by SQL
//! - Cached by type
//!
//! A query is considered unsafe to cache if it represents a potentially
//! unbounded number of queries. This is communicated to the connection through
//! [`QueryFragment::is_safe_to_cache_prepared`]. While this is done as a full AST
//! pass, after monomorphisation and inlining this will usually be optimized to
//! a constant. Only boxed queries will need to do actual work to answer this
//! question.
//!
//! The majority of AST nodes are safe to cache if their components are safe to
//! cache. There are at least 4 cases where a query is unsafe to cache, the
//! first of which is sketched in the example below:
//!
//! - queries containing `IN` with bind parameters
//!     - This requires 1 bind parameter per value, and is therefore unbounded
//!     - `IN` with subselects are cached (assuming the subselect is safe to
//!       cache)
//!     - `IN` statements for PostgreSQL are cached as they use `= ANY($1)`
//!       instead, which does not cause an unbounded number of binds
//! - `INSERT` statements with a variable number of rows
//!     - The SQL varies based on the number of rows being inserted.
//! - `UPDATE` statements
//!     - Technically it's bounded on "number of optional values being passed to
//!       `SET` factorial" but that's still quite high, and not worth caching
//!       for the same reason as single row inserts
//! - `SqlLiteral` nodes
//!     - We have no way of knowing whether the SQL was generated dynamically or
//!       not, so we must assume that it's unbounded
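//!
//! As an illustration of the first case, compare a filter with a fixed number
//! of binds to one whose bind count depends on the input length (a sketch;
//! `users` and its columns are hypothetical):
//!
//! ```ignore
//! // one bind parameter, the SQL never changes: safe to cache
//! users.filter(name.eq(target_name))
//!
//! // on backends without `= ANY`, this expands to `IN (?, ?, ...)`, so the
//! // SQL changes with the length of `target_names`: unsafe to cache
//! users.filter(name.eq_any(target_names))
//! ```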
//!
//! For queries which are unsafe to cache, the statement cache will never insert
//! them. They will be prepared and immediately released after use (or in the
//! case of PG they will use the unnamed prepared statement).
//!
//! For statements which are able to be cached, we then have to determine what
//! to use as the cache key. The standard method that virtually all ORMs or
//! database access layers use in the wild is to store the statements in a
//! hash map, using the SQL as the key.
//!
//! However, the majority of queries using Diesel that are safe to cache as
//! prepared statements will be uniquely identified by their type. For these
//! queries, we can bypass the query builder entirely. Since our AST is
//! generally optimized away by the compiler, for these queries the cost of
//! fetching a prepared statement from the cache is the cost of [`HashMap<u32,
//! _>::get`](std::collections::HashMap::get), where the key we're fetching by is a compile time constant. For
//! these types, the AST pass to gather the bind parameters will also be
//! optimized to accessing each parameter individually.
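//!
//! In terms of this module, such a query is looked up under
//! [`StatementCacheKey::Type`]; roughly (a sketch for a query type `Q` on a
//! backend `DB`, assuming `Q` has a static query id):
//!
//! ```ignore
//! // no SQL is constructed at all, the key is derived purely from the type
//! let key = StatementCacheKey::<DB>::Type(TypeId::of::<<Q as QueryId>::QueryId>());
//! // the lookup is then a plain `HashMap::get` with that key
//! ```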
//!
//! Determining if a query can be cached by type is the responsibility of the
//! [`QueryId`] trait. This trait is quite similar to `Any`, but with a few
//! differences (see the sketch after this list):
//!
//! - No `'static` bound
//!     - Something being a reference never changes the SQL that is generated,
//!       so `&T` has the same query id as `T`.
//! - `Option<TypeId>` instead of `TypeId`
//!     - We need to be able to constrain on this trait being implemented, but
//!       not all types will actually have a static query id. Hopefully once
//!       specialization is stable we can remove the `QueryId` bound and
//!       specialize on it instead (or provide a blanket impl for all `T`)
//! - Implementors give a broader type than `Self`
//!     - This really only affects bind parameters. There are 6 different Rust
//!       types which can be used for a parameter of type `timestamp`. The same
//!       statement can be used regardless of the Rust type, so [`Bound<ST, T>`](crate::expression::bound::Bound)
//!       defines its [`QueryId`] as [`Bound<ST, ()>`](crate::expression::bound::Bound).
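//!
//! A rough sketch of the shape of the trait (see [`QueryId`] for the real
//! definition; the default method body here is an approximation):
//!
//! ```ignore
//! pub trait QueryId {
//!     /// A type that uniquely identifies the generated SQL, often `Self`
//!     type QueryId: Any;
//!     /// `false` for e.g. boxed queries, whose SQL can vary at runtime
//!     const HAS_STATIC_QUERY_ID: bool;
//!
//!     fn query_id() -> Option<TypeId> {
//!         if Self::HAS_STATIC_QUERY_ID {
//!             Some(TypeId::of::<Self::QueryId>())
//!         } else {
//!             None
//!         }
//!     }
//! }
//!
//! // references never change the generated SQL, so they simply delegate
//! impl<'a, T: QueryId + ?Sized> QueryId for &'a T {
//!     type QueryId = T::QueryId;
//!     const HAS_STATIC_QUERY_ID: bool = T::HAS_STATIC_QUERY_ID;
//! }
//! ```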
//!
//! Whether a type returns `Some(id)` or `None` for its query ID is based on
//! whether the SQL it generates can change without the type changing. At the
//! moment, the only type which is safe to cache as a prepared statement but
//! does not have a static query ID is something which has been boxed.
//!
//! One potential optimization that we don't perform is storing the queries
//! which are cached by type ID in a separate map. Since a type ID is already a
//! unique integer, this would allow us to use a specialized map which knows
//! that there will never be hashing collisions (also known as a perfect hash
//! function), which would mean lookups are always constant time. However, this
//! would save nanoseconds on an operation that will take microseconds or even
//! milliseconds.

use crate::util::std_compat::Entry;
use alloc::borrow::Cow;
use alloc::boxed::Box;
use alloc::string::String;
use alloc::vec::Vec;
use core::any::TypeId;
use core::hash::Hash;
use core::ops::{Deref, DerefMut};

use strategy::{
    LookupStatementResult, StatementCacheStrategy, WithCacheStrategy, WithoutCacheStrategy,
};

use crate::backend::Backend;
use crate::connection::InstrumentationEvent;
use crate::query_builder::*;
use crate::result::QueryResult;

use super::{CacheSize, Instrumentation};

/// Various interfaces and implementations to control connection statement caching.
#[allow(unreachable_pub)]
pub mod strategy;

/// A prepared statement cache
#[allow(missing_debug_implementations, unreachable_pub)]
#[cfg_attr(
    diesel_docsrs,
    doc(cfg(feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes"))
)]
pub struct StatementCache<DB: Backend, Statement> {
    cache: Box<dyn StatementCacheStrategy<DB, Statement>>,
    // increment every time a query is cached
    // some backends might use it to create unique prepared statement names
    cache_counter: u64,
}

/// A helper type that indicates if a certain query
/// is cached inside of the prepared statement cache or not
///
/// This information can be used by the connection implementation
/// to signal this fact to the database while actually
/// preparing the statement
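///
/// A backend might, for example, derive a unique statement name from the
/// counter (a hypothetical sketch, not the naming scheme of any particular
/// backend):
///
/// ```ignore
/// let name = match prepare_for_cache {
///     PrepareForCache::Yes { counter } => format!("stmt_{counter}"),
///     // unnamed / one-off statement
///     PrepareForCache::No => String::new(),
/// };
/// ```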
#[derive(Debug, Clone, Copy)]
#[cfg_attr(
    diesel_docsrs,
    doc(cfg(feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes"))
)]
#[allow(unreachable_pub)]
pub enum PrepareForCache {
    /// The statement will be cached
    Yes {
        /// A counter that might be used as a unique identifier for the prepared statement.
        #[allow(dead_code)]
        counter: u64,
    },
    /// The statement won't be cached
    No,
}

#[allow(clippy::new_without_default, unreachable_pub)]
impl<DB, Statement> StatementCache<DB, Statement>
where
    DB: Backend + 'static,
    Statement: Send + 'static,
    DB::TypeMetadata: Send + Clone,
    DB::QueryBuilder: Default,
    StatementCacheKey<DB>: Hash + Eq,
{
    /// Create a new prepared statement cache using [`CacheSize::Unbounded`] as caching strategy.
    #[allow(unreachable_pub)]
    pub fn new() -> Self {
        StatementCache {
            cache: Box::new(WithCacheStrategy::default()),
            cache_counter: 0,
        }
    }

    /// Set the caching strategy to one of the predefined implementations
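    ///
    /// A usage sketch:
    ///
    /// ```ignore
    /// // keep an unbounded per-connection cache (the default)
    /// cache.set_cache_size(CacheSize::Unbounded);
    /// // or disable statement caching entirely
    /// cache.set_cache_size(CacheSize::Disabled);
    /// ```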
    pub fn set_cache_size(&mut self, size: CacheSize) {
        if self.cache.cache_size() != size {
            self.cache = match size {
                CacheSize::Unbounded => Box::new(WithCacheStrategy::default()),
                CacheSize::Disabled => Box::new(WithoutCacheStrategy::default()),
            }
        }
    }

    /// Set a custom caching strategy. Used in tests to verify the caching logic.
    #[allow(dead_code)]
    pub(crate) fn set_strategy<Strategy>(&mut self, s: Strategy)
    where
        Strategy: StatementCacheStrategy<DB, Statement> + 'static,
    {
        self.cache = Box::new(s);
    }

    /// Prepare a query as a prepared statement
    ///
    /// This function returns a prepared statement corresponding to the
    /// query passed as `source` with the bind values passed as `bind_types`.
    /// If the query is already cached inside this prepared statement cache
    /// the cached prepared statement will be returned, otherwise `prepare_fn`
    /// will be called to create a new prepared statement for this query source.
    /// The first parameter of the callback contains the query string, the second
    /// parameter indicates if the constructed prepared statement will be cached or not.
    /// See the [module](self) documentation for details
    /// about which statements are cached and which are not cached.
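    ///
    /// A rough sketch of how a connection implementation might drive this API;
    /// `RawConnection`, `RawStatement` and `prepare_raw` are hypothetical
    /// stand-ins for backend specific types:
    ///
    /// ```ignore
    /// fn prepare(
    ///     raw: &mut RawConnection,
    ///     sql: &str,
    ///     is_cached: PrepareForCache,
    ///     _metadata: &[MyTypeMetadata],
    /// ) -> QueryResult<RawStatement> {
    ///     // give the statement a persistent name only if it will be kept around
    ///     raw.prepare_raw(sql, matches!(is_cached, PrepareForCache::Yes { .. }))
    /// }
    ///
    /// let statement = statement_cache.cached_statement(
    ///     &query,
    ///     &backend,
    ///     &bind_types,
    ///     &mut raw_connection,
    ///     prepare,
    ///     &mut instrumentation,
    /// )?;
    /// ```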
    //
    // Notes:
    // This function explicitly takes a connection and a function pointer (and no generic callback)
    // as arguments to ensure that we don't leak generic query types into the prepare function
    #[allow(unreachable_pub)]
    #[cfg(any(
        feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes",
        feature = "__sqlite-shared",
        feature = "mysql"
    ))]
    pub fn cached_statement<'a, T, R, C>(
        &'a mut self,
        source: &T,
        backend: &DB,
        bind_types: &[DB::TypeMetadata],
        conn: C,
        prepare_fn: fn(C, &str, PrepareForCache, &[DB::TypeMetadata]) -> R,
        instrumentation: &mut dyn Instrumentation,
    ) -> R::Return<'a>
    where
        T: QueryFragment<DB> + QueryId,
        R: StatementCallbackReturnType<Statement, C> + 'a,
    {
        self.cached_statement_non_generic(
            T::query_id(),
            source,
            backend,
            bind_types,
            conn,
            prepare_fn,
            instrumentation,
        )
    }

    /// Prepare a query as a prepared statement
    ///
    /// This function closely mirrors `Self::cached_statement` but
    /// eliminates the generic query type in favour of a trait object
    ///
    /// This can be easier to use in situations where you have already turned
    /// the query type into a concrete SQL string
    // Notes:
    // This function explicitly takes a connection and a function pointer (and no generic callback)
    // as arguments to ensure that we don't leak generic query types into the prepare function
    #[allow(unreachable_pub)]
    #[allow(clippy::too_many_arguments)] // we need all of them
    pub fn cached_statement_non_generic<'a, R, C>(
        &'a mut self,
        maybe_type_id: Option<TypeId>,
        source: &dyn QueryFragmentForCachedStatement<DB>,
        backend: &DB,
        bind_types: &[DB::TypeMetadata],
        conn: C,
        prepare_fn: fn(C, &str, PrepareForCache, &[DB::TypeMetadata]) -> R,
        instrumentation: &mut dyn Instrumentation,
    ) -> R::Return<'a>
    where
        R: StatementCallbackReturnType<Statement, C> + 'a,
    {
        Self::cached_statement_non_generic_impl(
            self.cache.as_mut(),
            maybe_type_id,
            source,
            backend,
            bind_types,
            conn,
            |conn, sql, is_cached| {
                if is_cached {
                    instrumentation.on_connection_event(InstrumentationEvent::CacheQuery { sql });
                    self.cache_counter += 1;
                    prepare_fn(
                        conn,
                        sql,
                        PrepareForCache::Yes {
                            counter: self.cache_counter,
                        },
                        bind_types,
                    )
                } else {
                    prepare_fn(conn, sql, PrepareForCache::No, bind_types)
                }
            },
        )
    }

    /// Reduce the amount of monomorphized code by factoring this via dynamic dispatch
    /// There will be only one instance of `R` for diesel (and a different single instance for diesel-async)
    /// There will be only one instance per connection type `C` for each connection that
    /// uses this prepared statement impl; this closely correlates to the types `DB` and `Statement`
    /// for the overall statement cache impl
    fn cached_statement_non_generic_impl<'a, R, C>(
        cache: &'a mut dyn StatementCacheStrategy<DB, Statement>,
        maybe_type_id: Option<TypeId>,
        source: &dyn QueryFragmentForCachedStatement<DB>,
        backend: &DB,
        bind_types: &[DB::TypeMetadata],
        conn: C,
        prepare_fn: impl FnOnce(C, &str, bool) -> R,
    ) -> R::Return<'a>
    where
        R: StatementCallbackReturnType<Statement, C> + 'a,
    {
        // this function cannot use the `?` operator
        // as we want to abstract over returning `QueryResult<MaybeCached>` and
        // `impl Future<Output = QueryResult<MaybeCached>>` here
        // to share the prepared statement cache implementation between diesel and
        // diesel_async
        //
        // For this reason we need to match explicitly on each error and call `R::from_error()`
        // to construct the right error return variant
        let cache_key =
            match StatementCacheKey::for_source(maybe_type_id, source, bind_types, backend) {
                Ok(o) => o,
                Err(e) => return R::from_error(e),
            };
        let is_safe_to_cache_prepared = match source.is_safe_to_cache_prepared(backend) {
            Ok(o) => o,
            Err(e) => return R::from_error(e),
        };
        // early return if the statement cannot be cached
        if !is_safe_to_cache_prepared {
            let sql = match cache_key.sql(source, backend) {
                Ok(sql) => sql,
                Err(e) => return R::from_error(e),
            };
            return prepare_fn(conn, &sql, false).map_to_no_cache();
        }
        let entry = cache.lookup_statement(cache_key);
        match entry {
            // The statement is already cached
            LookupStatementResult::CacheEntry(Entry::Occupied(e)) => {
                R::map_to_cache(e.into_mut(), conn)
            }
            // The statement is not cached but there is capacity to cache it
            LookupStatementResult::CacheEntry(Entry::Vacant(e)) => {
                let sql = match e.key().sql(source, backend) {
                    Ok(sql) => sql,
                    Err(e) => return R::from_error(e),
                };
                let st = prepare_fn(conn, &sql, true);
                st.register_cache(|stmt| e.insert(stmt))
            }
            // The statement is not cached and there is no capacity to cache it
            LookupStatementResult::NoCache(cache_key) => {
                let sql = match cache_key.sql(source, backend) {
                    Ok(sql) => sql,
                    Err(e) => return R::from_error(e),
                };
                prepare_fn(conn, &sql, false).map_to_no_cache()
            }
        }
    }
}

/// Implemented for all `QueryFragment`s, dedicated to dynamic dispatch within the context of
/// `statement_cache`
///
/// We want the generated code to be as small as possible, so for each query passed to
/// [`StatementCache::cached_statement`] the generated assembly will just call a non generic
/// version with dynamic dispatch pointing to the VTABLE of this minimal trait
///
/// This preserves the opportunity for the compiler to entirely optimize the `construct_sql`
/// function as a function that simply returns a constant `String`.
#[allow(unreachable_pub)]
#[cfg_attr(
    diesel_docsrs,
    doc(cfg(feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes"))
)]
pub trait QueryFragmentForCachedStatement<DB> {
    /// Convert the query fragment into a SQL string for the given backend
    fn construct_sql(&self, backend: &DB) -> QueryResult<String>;

    /// Check whether it's safe to cache the query
    fn is_safe_to_cache_prepared(&self, backend: &DB) -> QueryResult<bool>;
}

impl<T, DB> QueryFragmentForCachedStatement<DB> for T
where
    DB: Backend,
    DB::QueryBuilder: Default,
    T: QueryFragment<DB>,
{
    fn construct_sql(&self, backend: &DB) -> QueryResult<String> {
        let mut query_builder = DB::QueryBuilder::default();
        self.to_sql(&mut query_builder, backend)?;
        Ok(query_builder.finish())
    }

    fn is_safe_to_cache_prepared(&self, backend: &DB) -> QueryResult<bool> {
        <T as QueryFragment<DB>>::is_safe_to_cache_prepared(self, backend)
    }
}

/// Wraps a possibly cached prepared statement
///
/// Essentially a customized version of [`Cow`]
/// that does not depend on [`ToOwned`]
#[allow(missing_debug_implementations, unreachable_pub)]
#[cfg_attr(
    diesel_docsrs,
    doc(cfg(feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes"))
)]
#[non_exhaustive]
pub enum MaybeCached<'a, T: 'a> {
    /// Contains a prepared statement that is not cached
    CannotCache(T),
    /// Contains a mutable reference to a cached prepared statement
    Cached(&'a mut T),
}

/// This trait abstracts over the type returned by the prepare statement function
///
/// The main use-case for this abstraction is to share the same statement cache implementation
/// between diesel and diesel-async.
#[cfg_attr(
    diesel_docsrs,
    doc(cfg(feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes"))
)]
#[allow(unreachable_pub)]
pub trait StatementCallbackReturnType<S: 'static, C> {
    /// The return type of `StatementCache::cached_statement`
    ///
    /// Either a `QueryResult<MaybeCached<S>>` or a future of that result type
    type Return<'a>;

    /// Create the return type from an error
    fn from_error<'a>(e: diesel::result::Error) -> Self::Return<'a>;

    /// Map the callback return type to the `MaybeCached::CannotCache` variant
    fn map_to_no_cache<'a>(self) -> Self::Return<'a>
    where
        Self: 'a;

    /// Map the cached statement to the `MaybeCached::Cached` variant
    fn map_to_cache(stmt: &mut S, conn: C) -> Self::Return<'_>;

    /// Insert the created statement into the cache via the provided callback
    /// and then turn the returned reference into `MaybeCached::Cached`
    fn register_cache<'a>(
        self,
        callback: impl FnOnce(S) -> &'a mut S + Send + 'a,
    ) -> Self::Return<'a>
    where
        Self: 'a;
}

impl<S, C> StatementCallbackReturnType<S, C> for QueryResult<S>
where
    S: 'static,
{
    type Return<'a> = QueryResult<MaybeCached<'a, S>>;

    fn from_error<'a>(e: diesel::result::Error) -> Self::Return<'a> {
        Err(e)
    }

    fn map_to_no_cache<'a>(self) -> Self::Return<'a> {
        self.map(MaybeCached::CannotCache)
    }

    fn map_to_cache(stmt: &mut S, _conn: C) -> Self::Return<'_> {
        Ok(MaybeCached::Cached(stmt))
    }

    fn register_cache<'a>(
        self,
        callback: impl FnOnce(S) -> &'a mut S + Send + 'a,
    ) -> Self::Return<'a>
    where
        Self: 'a,
    {
        Ok(MaybeCached::Cached(callback(self?)))
    }
}

impl<T> Deref for MaybeCached<'_, T> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        match *self {
            MaybeCached::CannotCache(ref x) => x,
            MaybeCached::Cached(ref x) => x,
        }
    }
}

impl<T> DerefMut for MaybeCached<'_, T> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        match *self {
            MaybeCached::CannotCache(ref mut x) => x,
            MaybeCached::Cached(ref mut x) => x,
        }
    }
}

/// The lookup key used by [`StatementCache`] internally
///
/// This can contain either a type id known at compile time
/// (representing a statically known query) or a query string plus
/// bind parameter types calculated at runtime (for queries
/// that may change depending on their parameters)
#[allow(missing_debug_implementations, unreachable_pub)]
#[derive(Hash, PartialEq, Eq)]
#[cfg_attr(
    diesel_docsrs,
    doc(cfg(feature = "i-implement-a-third-party-backend-and-opt-into-breaking-changes"))
)]
pub enum StatementCacheKey<DB: Backend> {
    /// Represents a query known at compile time
    ///
    /// Calculated via [`QueryId::QueryId`]
    Type(TypeId),
    /// Represents a dynamically constructed query
    ///
    /// This variant is used if [`QueryId::HAS_STATIC_QUERY_ID`]
    /// is `false` and [`AstPass::unsafe_to_cache_prepared`] is not
    /// called for a given query.
    Sql {
        /// contains the sql query string
        sql: String,
        /// contains the types of any bind parameter passed to the query
        bind_types: Vec<DB::TypeMetadata>,
    },
}

impl<DB> StatementCacheKey<DB>
where
    DB: Backend,
    DB::QueryBuilder: Default,
    DB::TypeMetadata: Clone,
{
    /// Create a new statement cache key for the given query source
    // Note: Intentionally monomorphic over source.
    #[allow(unreachable_pub)]
    pub fn for_source(
        maybe_type_id: Option<TypeId>,
        source: &dyn QueryFragmentForCachedStatement<DB>,
        bind_types: &[DB::TypeMetadata],
        backend: &DB,
    ) -> QueryResult<Self> {
        match maybe_type_id {
            Some(id) => Ok(StatementCacheKey::Type(id)),
            None => {
                let sql = source.construct_sql(backend)?;
                Ok(StatementCacheKey::Sql {
                    sql,
                    bind_types: bind_types.into(),
                })
            }
        }
    }

    /// Get the SQL for the given query source
    ///
    /// This is an optimization that avoids constructing the query string
    /// a second time if it's already part of the current cache key
    // Note: Intentionally monomorphic over source.
    #[allow(unreachable_pub)]
    pub fn sql(
        &self,
        source: &dyn QueryFragmentForCachedStatement<DB>,
        backend: &DB,
    ) -> QueryResult<Cow<'_, str>> {
        match *self {
            StatementCacheKey::Type(_) => source.construct_sql(backend).map(Cow::Owned),
            StatementCacheKey::Sql { ref sql, .. } => Ok(Cow::Borrowed(sql)),
        }
    }
}