Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems

Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems (English, Paperback, Bubeck Sebastien)

Price: ₹9,546 (MRP ₹11,753, 18% off)
Highlights
• Language: English
• Binding: Paperback
• Publisher: now publishers Inc
• Genre: Computers
• ISBN: 9781601986269
• Pages: 138
Services
• Cash on Delivery available
• 7 Days Replacement Policy
Seller
• RBODBooks (rating 3.7)
Description

A multi-armed bandit problem - or, simply, a bandit problem - is a sequential allocation problem defined by a set of actions. At each time step, a unit resource is allocated to an action and some observable payoff is obtained. The goal is to maximize the total payoff obtained in a sequence of allocations. The name bandit refers to the colloquial term for a slot machine (a "one-armed bandit" in American slang). In a casino, a sequential allocation problem is obtained when the player is facing many slot machines at once (a "multi-armed bandit"), and must repeatedly choose where to insert the next coin.

Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off. This is the balance between staying with the option that gave the highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing.

Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, it also analyzes some of the most important variants and extensions, such as the contextual bandit model. This monograph is an ideal reference for students and researchers with an interest in bandit problems.
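To make the i.i.d. setting above concrete, a classical strategy with provably low regret is the upper confidence bound (UCB) rule: pull the arm whose empirical mean plus an exploration bonus is largest. The following is a minimal sketch only, assuming Bernoulli-distributed payoffs; the `ucb1` function name and its parameters are illustrative, not from the book.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms; return total reward and per-arm pull counts."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k    # number of pulls per arm
    sums = [0.0] * k    # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialise the estimates
        else:
            # choose the arm maximising empirical mean + exploration bonus
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts
```

Over a long horizon the pull counts concentrate on the best arm, so the regret grows only logarithmically in the horizon - the kind of guarantee the monograph proves for the stochastic case.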
Specifications

Book Details
• Imprint: now publishers Inc

Dimensions
• Width: 8 mm
• Height: 234 mm
• Length: 156 mm
• Weight: 205 g