# Trampoline payments
## See also
- [[Multi-frame Sphinx Onion Format]]
## [Bitcoin Optech page](https://bitcoinops.org/en/topics/trampoline-payments/)
*Trampoline payments are a proposed type of payment where the spender routes the payment to an intermediate node who can select the rest of the path to the final receiver.*
*Using a single trampoline node necessarily reveals the destination to it. To regain privacy, a spender may require a payment be routed through multiple trampoline nodes so that none of them knows whether they’re routing the payment to the final receiver or just another intermediate trampoline node.*
*Although allowing trampoline nodes to select part of the path likely requires paying more routing fees, it means the spender doesn’t need to know how to route payments to any arbitrary node—it’s sufficient for the spender to know how to route a payment to any trampoline-compatible node. This is advantageous for lightweight LN clients that aren’t able to track the full network graph because they’re often offline or run on underpowered mobile hardware.*
### Primary documentation listed by Bitcoin Optech:
#### [[Lightning-dev] Outsourcing route computation with trampoline payments](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-March/001939.html)
I think we can use the upcoming "[[Multi-frame Sphinx Onion Format]]" [1] to trustlessly outsource the computation of payment routes.

A sends a payment to an intermediate node N, and in the onion payload, A provides the actual destination D of the payment and the amount. N then has to find a route to D and make a payment himself. Of course D may be yet another intermediate node, and so on. The fact that we can make several "trampoline hops" preserves the privacy characteristics that we already have.

Intermediate nodes have an incentive to cooperate because they are part of the route and will earn fees. As a nice side effect, it also creates an incentive for "routing nodes" to participate in the gossip, which they currently lack.

This could significantly lessen the load on (lite) sending nodes, both on the memory/bandwidth side (they would only need to know a smallish neighborhood) and on the CPU side (intermediate nodes would run the actual route computation).

As Christian pointed out, one downside is that fee computation would have to be pessimistic (he also came up with the name trampoline!).

Cheers,
Pierre-Marie
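The per-hop outsourcing described above can be pictured as a payload like the following. This is a minimal sketch; the field names are illustrative inventions, not any spec encoding (the real format was specified later in the trampoline onion proposals).

```python
# Illustrative payload A places in the onion for intermediate node N.
# Field names are hypothetical, chosen only to mirror the email's
# description: A tells N the destination D and the amount.
payload_for_n = {
    "destination": "D",          # final (or next trampoline) node
    "amount_msat": 100_000_000,  # amount D should receive
    "fee_budget_msat": 5_000,    # what N may spend to find its own route
}

# N reads this payload, computes a route to D, and forwards the payment;
# D may in turn be another intermediate node holding a similar payload.
assert payload_for_n["destination"] == "D"
```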
## High level explanation by [[Roy Sheinfeld]]
*Trampoline nodes are simply Lightning nodes that contain the full network graph and take over the job of finding a route from payer to recipient. Instead of having to download and constantly update the graph, each light client would only need a connection to one reliable trampoline node. The payment would jump from one trampoline node to the next until it arrives at the intended recipient.*
Source: [Lightning Network Routing: Privacy and Efficiency in a Positive-Sum Game](https://medium.com/breez-technology/lightning-network-routing-privacy-and-efficiency-in-a-positive-sum-game-b8e443f50247)
## [\[Lightning-dev\] Trampoline Routing](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html)
By [[Bastien Teinturier]]
I realized that trampoline routing has only been briefly described to this list (credits to cdecker and pm47 for laying out the foundations). I just published an updated PR [1] and want to take this opportunity to present the high level view here and the parts that need a concept ACK and more feedback.
Trampoline routing is conceptually quite simple. Alice wants to send a payment to Bob, but she doesn't know a route to get there because Alice only keeps a small area of the routing table locally (Alice has a crappy phone, damn it Alice sell some satoshis and buy a real phone). However, Alice has a few trampoline nodes in her friends-of-friends and knows some trampoline nodes outside of her local area (but she doesn't know how to reach them). Alice would like to send a payment to a trampoline node she can reach and defer calculation of the rest of the route to that node.
The onion routing part is very simple now that we have variable-length onion payloads (thanks again cdecker!). Just like Russian dolls, we simply put a small onion inside a big onion. And HTLC forwarding follows very naturally.
It's always simpler with an example. Let's imagine that Alice can reach three trampoline nodes: T1, T2 and T3. She also knows the details of many remote trampoline nodes that she cannot reach: RT1, RT2, RT3 and RT4. Alice selects T1 and RT2 to use as trampoline hops. She builds a small onion that describes the following route:
*Alice -> T1 -> RT2 -> Bob*
She finds a route to T1 and builds a normal onion to send a payment to T1:
*Alice -> N1 -> N2 -> T1*
In the payload for T1, Alice puts the small trampoline onion. When T1 receives the payment, he is able to peel one layer of the trampoline onion and discover that he must forward the payment to RT2. T1 finds a route to RT2 and builds a normal onion to send a payment to RT2:
*T1 -> N3 -> RT2*
In the payload for RT2, T1 puts the peeled small trampoline onion. When RT2 receives the payment, he is able to peel one layer of the trampoline onion and discover that he must forward the payment to Bob. RT2 finds a route to Bob and builds a normal onion to send a payment:
*RT2 -> N4 -> N5 -> Bob*
In the payload for Bob, RT2 puts the peeled small trampoline onion. When Bob receives the payment, he is able to peel the last layer of the trampoline onion and discover that he is the final recipient, and fulfills the payment.
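The walkthrough above can be sketched as follows. This is a toy model where each onion is just a list of payloads and "peeling" is list indexing; real nodes use Sphinx encryption, and all names here are illustrative.

```python
def peel(onion):
    """Return (payload for the current node, remaining onion)."""
    return onion[0], onion[1:]

# Alice's small trampoline onion for the route T1 -> RT2 -> Bob.
trampoline_onion = [
    {"node": "T1", "forward_to": "RT2"},
    {"node": "RT2", "forward_to": "Bob"},
    {"node": "Bob", "final": True},
]

# Each trampoline node peels one layer, learns the next trampoline hop,
# finds its own route there (e.g. T1 -> N3 -> RT2), and embeds the
# peeled onion in the last payload of a fresh outer onion.
decisions = []
onion = trampoline_onion
while onion:
    payload, onion = peel(onion)
    decisions.append(payload.get("forward_to", "fulfill"))

print(decisions)  # ['RT2', 'Bob', 'fulfill']
```

Note that each node only ever sees its own layer: T1 learns RT2 but not Bob, and RT2 learns Bob but not Alice, which is what preserves the privacy properties of source routing.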
Alice has successfully sent a payment to Bob deferring route calculation to some chosen trampoline nodes. That part was simple and (hopefully) not controversial, but it left out some important details:
1. How do trampoline nodes specify their fees and cltv requirements?
2. How does Alice sync the fees and cltv requirements for her remote trampoline nodes?
To answer 1., trampoline nodes need to estimate a fee and cltv that allow them to route to (almost) any other trampoline node. This is likely going to increase the fees paid by end-users, but they can't have their cake and eat it too: by not syncing the whole network, users are trading fees for ease of use and payment reliability.
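One way to picture that pessimistic estimate: a trampoline node quotes a fee and cltv high enough to cover nearly any route it might have to find. The numbers, percentile choice, and helper names below are purely illustrative, not part of the proposal.

```python
# Sketch: quote a fee/cltv that covers e.g. the 95th percentile of the
# routes this trampoline node knows. Numbers are made up; the actual
# policy is up to each node.

def pessimistic_quote(amount_msat, routes):
    """Return (fee_msat, cltv) covering most known routes."""
    fees = sorted(r["base_msat"] + r["ppm"] * amount_msat // 1_000_000
                  for r in routes)
    cltvs = sorted(r["cltv"] for r in routes)
    i = int(0.95 * (len(routes) - 1))
    return fees[i], cltvs[i]

routes = [
    {"base_msat": 1_000, "ppm": 100, "cltv": 40},
    {"base_msat": 1_000, "ppm": 500, "cltv": 144},
    {"base_msat": 2_000, "ppm": 1_000, "cltv": 288},
]
fee, cltv = pessimistic_quote(50_000_000, routes)
print(fee, cltv)  # 26000 144
```

The user overpays relative to the cheapest route, which is exactly the fee/ease-of-use trade-off described above.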
To answer 2., we can re-use the existing gossip infrastructure to exchange a new *node_update* message that contains the trampoline fees and cltv. However, Alice doesn't want to receive every network update because she doesn't have the bandwidth to support it (damn it again Alice, upgrade your mobile plan). My suggestion is to create a filter system (similar to BIP37) where Alice sends gossip filters to her peers, and peers only forward to Alice updates that match these filters. This doesn't have the issues BIP37 has for Bitcoin because it has a cost for Alice: she has to open a channel (and thus lock funds) to get a connection to a peer. Peers can refuse to serve filters if they are too expensive to compute, but the filters I propose in the PR are very cheap (a simple xor or a node-distance comparison).
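A filter of the xor kind mentioned could be as simple as the check below. This is a sketch with made-up field names and deliberately short node ids (real ids are 33-byte pubkeys); the actual filter encoding is in the PR.

```python
# Sketch of a cheap gossip filter: Alice asks a peer to forward only
# node_update messages whose node id is "close" to hers by xor distance.

def xor_distance(id_a: bytes, id_b: bytes) -> int:
    return int.from_bytes(id_a, "big") ^ int.from_bytes(id_b, "big")

def matches_filter(update_id: bytes, filter_id: bytes,
                   max_distance: int) -> bool:
    """Peer-side check: O(1) per update, so cheap to serve."""
    return xor_distance(update_id, filter_id) <= max_distance

alice = bytes.fromhex("02aa")
near = bytes.fromhex("02ab")   # differs in the low bits: small distance
far = bytes.fromhex("f0aa")    # differs in the high bits: large distance

assert matches_filter(near, alice, max_distance=0xFF)
assert not matches_filter(far, alice, max_distance=0xFF)
```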
If you're interested in the technical details, head over to [1] ([[Trampoline payments#Initial BOLT Proposal Closed BOLTS 654 https github com lightning bolts pull 654|Initial BOLT proposal]]). I would really like to get feedback from this list on the concept itself, and especially on the gossip and fee estimation parts. If you made it that far, I'm sure you have many questions and suggestions ;).
## BOLT Proposals
### Initial BOLT Proposal (Closed): [BOLTS#654](https://github.com/lightning/bolts/pull/654)
#### Criticism from [[Matt Corallo]]:
Personally I think moving in this direction is a bad idea for two reasons:
1. One of the key pitches for lightning is its privacy properties - properties which today are somewhat weak, but which can be enabled only with sufficient route graph data to select a sufficiently long and diverse path. Just look at the motivation section here for why this is going to even further erode practical lightning privacy - folks can and will use this to hold a routedb that contains just a group of very large nodes and use all of their routing dbs, probably losing a lot of privacy. This is a cost for everyone, not just the users of unfortunate implementations that do this - not only is their privacy set reduced, but lightning gains a reputation of not being private, and moves Lightning towards utter crap like Interledger instead of a well-designed user-first system.
2. At a high level, this seems like premature optimization. I'm highly dubious that the current routedb can't simply be (and shouldn't simply be) pruned by learning which nodes are regularly offline/fail to route regularly and have this be a viable routing table. If churn in the routedb is an issue for you, stop fetching it all the time and use AMP to send a payment simultaneously along multiple paths, retrying the missing paths down a successful path to ensure quick payment success. If initial fetch of the routedb is an issue for you, a) fetch well-connected subsets from redundant central servers on startup and b) work on schnorr signatures in the channel announcements to reduce the number of sigchecks and amount of bandwidth used. Don't jump to three steps back in the privacy guarantees lightning provides until you've exhausted low-hanging fruit to get towards a similar fetch time.
### High-level description (2021 edition): [BOLTS#829](https://github.com/lightning/bolts/pull/829) (open)
This proposal allows nodes running on constrained devices to sync only a small portion of the network and leverage trampoline nodes to calculate the missing parts of the payment route while providing the same privacy as fully source-routed payments.
The main idea is to use layered onions: a normal onion contains a smaller onion for the last hop of the route, and that smaller onion contains routing information about the next trampoline hop.
This PR provides a high-level view of trampoline routing, where concepts and designs are presented in a more user-friendly format than formal spec work. This document lets reviewers see the big picture and how all the pieces work together. It also contains pretty detailed examples that should give reviewers some intuition about the subtle low-level details.
Then reviewers can move on to [#836](https://github.com/lightning/bolts/pull/836) which contains the usual spec format for the onion construction: this is where we'll work on the nitty-gritty details.
This PR supersedes [#654](https://github.com/lightning/bolts/pull/654) based on what we learnt after one year running trampoline in production in [Phoenix](https://phoenix.acinq.co/) and many discussions with [@ecdsa](https://github.com/ecdsa) while Electrum worked on their own trampoline implementation. The important changes are:
- the trampoline onion is now variable-size: it's much more flexible and has no privacy downside since it's not observable at the network layer (which is the reason why the outer onion is constant size)
- trampoline doesn't need any new gossip mechanism and instead relies on the recipient doing a small amount of work to include trampoline hints in invoices
### Trampoline onion format: [BOLTS#836](https://github.com/lightning/bolts/pull/836) (open)
Trampoline routing uses layered onions to trustlessly and privately offload the calculation of parts of a payment route to remote trampoline nodes.
A normal onion contains a smaller onion for the last hop of the route, and that smaller onion contains routing information about the next trampoline hop.
Intermediate trampoline nodes "fill the gap" by finding a route to the next trampoline node and sending it the peeled trampoline onion, until the payment reaches its final destination.
This PR details the onion construction and requirements for supporting nodes. I advise readers to also have a look at [#829](https://github.com/lightning/bolts/pull/829) which gives a more high-level view of the different components, how they interact, and provides nice diagrams that help understand the low-level details.