go/src/google.golang.org: Update cloud and grpc

In particular, pick up this change:

https://code.googlesource.com/gocloud/+/b9ea8bd96fba40f7bca8062c958e7fe7087625bf

  bigtable/bttest: Fix race between GC and row mutations.

  A deadlock happens when GC is triggered in parallel with row mutations.
  The problem is that server.MutateRow (and similar funcs) and table.gc()
  acquire the table and row locks in different orders.

  Reviewed-on: https://code-review.googlesource.com/4871
  Reviewed-by: David Symonds <dsymonds@golang.org>
  Reviewed-by: Dave Day <djd@golang.org>

Also added new dependencies:
github.com/googleapis/{gax-go,proto-client-go}

Change-Id: Ifef82ae43f439d76e2d9bf6285f1e25843710efa
diff --git a/go/src/github.com/googleapis/gax-go/.travis.yml b/go/src/github.com/googleapis/gax-go/.travis.yml
new file mode 100644
index 0000000..a0443a4
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/.travis.yml
@@ -0,0 +1,15 @@
+sudo: false
+language: go
+go:
+  - 1.5
+  - 1.6
+before_install:
+  - go get golang.org/x/tools/cmd/cover
+  - go get golang.org/x/tools/cmd/goimports
+script:
+  - gofmt -l .
+  - goimports -l .
+  - go tool vet .
+  - go test -coverprofile=coverage.txt -covermode=atomic
+after_success:
+  - bash <(curl -s https://codecov.io/bash)
diff --git a/go/src/github.com/googleapis/gax-go/CONTRIBUTING.md b/go/src/github.com/googleapis/gax-go/CONTRIBUTING.md
new file mode 100644
index 0000000..2827b7d
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/CONTRIBUTING.md
@@ -0,0 +1,27 @@
+Want to contribute? Great! First, read this page (including the small print at the end).
+
+### Before you contribute
+Before we can use your code, you must sign the
+[Google Individual Contributor License Agreement]
+(https://cla.developers.google.com/about/google-individual)
+(CLA), which you can do online. The CLA is necessary mainly because you own the
+copyright to your changes, even after your contribution becomes part of our
+codebase, so we need your permission to use and distribute your code. We also
+need to be sure of various other things—for instance that you'll tell us if you
+know that your code infringes on other people's patents. You don't have to sign
+the CLA until after you've submitted your code for review and a member has
+approved it, but you must do it before we can put your code into our codebase.
+Before you start working on a larger contribution, you should get in touch with
+us first through the issue tracker with your idea so that we can help out and
+possibly guide you. Coordinating up front makes it much easier to avoid
+frustration later on.
+
+### Code reviews
+All submissions, including submissions by project members, require review. We
+use Github pull requests for this purpose.
+
+### The small print
+Contributions made by corporations are covered by a different agreement than
+the one above, the
+[Software Grant and Corporate Contributor License Agreement]
+(https://cla.developers.google.com/about/google-corporate).
diff --git a/go/src/github.com/googleapis/gax-go/LICENSE b/go/src/github.com/googleapis/gax-go/LICENSE
new file mode 100644
index 0000000..6d16b65
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/LICENSE
@@ -0,0 +1,27 @@
+Copyright 2016, Google Inc.
+All rights reserved.
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/go/src/github.com/googleapis/gax-go/README.google b/go/src/github.com/googleapis/gax-go/README.google
new file mode 100644
index 0000000..2aca049
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/README.google
@@ -0,0 +1,10 @@
+URL: https://github.com/googleapis/gax-go/archive/1452cb091e2ba6d26f2874fea515beeac5f641b6.zip
+Version: 1452cb091e2ba6d26f2874fea515beeac5f641b6
+License: New BSD
+License File: LICENSE
+
+Description:
+Google API Extensions for Go
+
+Local Modifications:
+No modifications.
diff --git a/go/src/github.com/googleapis/gax-go/README.md b/go/src/github.com/googleapis/gax-go/README.md
new file mode 100644
index 0000000..8cb1b9c
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/README.md
@@ -0,0 +1,9 @@
+Google API Extensions for Go
+============================
+
+[![Build Status](https://travis-ci.org/googleapis/gax-golang.svg?branch=master)](https://travis-ci.org/googleapis/gax-golang)
+[![Code Coverage](https://img.shields.io/codecov/c/github/googleapis/gax-golang.svg)](https://codecov.io/github/googleapis/gax-golang)
+
+Google API Extensions for Go (gax-golang) is a set of modules which aids the
+development of APIs for clients and servers based on `gRPC` and Google API
+conventions.
diff --git a/go/src/github.com/googleapis/gax-go/call_option.go b/go/src/github.com/googleapis/gax-go/call_option.go
new file mode 100644
index 0000000..d605132
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/call_option.go
@@ -0,0 +1,118 @@
+package gax
+
+import (
+	"time"
+
+	"google.golang.org/grpc/codes"
+)
+
+type CallOption interface {
+	Resolve(*CallSettings)
+}
+
+type callOptions []CallOption
+
+func (opts callOptions) Resolve(s *CallSettings) *CallSettings {
+	for _, opt := range opts {
+		opt.Resolve(s)
+	}
+	return s
+}
+
+// Encapsulates the call settings for a particular API call.
+type CallSettings struct {
+	Timeout       time.Duration
+	RetrySettings RetrySettings
+}
+
+// Per-call configurable settings for retrying upon transient failure.
+type RetrySettings struct {
+	RetryCodes      map[codes.Code]bool
+	BackoffSettings BackoffSettings
+}
+
+// Parameters to the exponential backoff algorithm for retrying.
+type BackoffSettings struct {
+	DelayTimeoutSettings MultipliableDuration
+	RPCTimeoutSettings   MultipliableDuration
+}
+
+type MultipliableDuration struct {
+	Initial    time.Duration
+	Max        time.Duration
+	Multiplier float64
+}
+
+func (w CallSettings) Resolve(s *CallSettings) {
+	s.Timeout = w.Timeout
+	s.RetrySettings = w.RetrySettings
+
+	s.RetrySettings.RetryCodes = make(map[codes.Code]bool, len(w.RetrySettings.RetryCodes))
+	for key, value := range w.RetrySettings.RetryCodes {
+		s.RetrySettings.RetryCodes[key] = value
+	}
+}
+
+type withTimeout time.Duration
+
+func (w withTimeout) Resolve(s *CallSettings) {
+	s.Timeout = time.Duration(w)
+}
+
+// WithTimeout sets the client-side timeout for API calls if the call isn't
+// retrying.
+func WithTimeout(timeout time.Duration) CallOption {
+	return withTimeout(timeout)
+}
+
+type withRetryCodes []codes.Code
+
+func (w withRetryCodes) Resolve(s *CallSettings) {
+	s.RetrySettings.RetryCodes = make(map[codes.Code]bool)
+	for _, code := range []codes.Code(w) {
+		s.RetrySettings.RetryCodes[code] = true
+	}
+}
+
+// WithRetryCodes sets a list of Google API canonical error codes upon which a
+// retry should be attempted. If nil, the call will not retry.
+func WithRetryCodes(retryCodes []codes.Code) CallOption {
+	return withRetryCodes(retryCodes)
+}
+
+type withDelayTimeoutSettings MultipliableDuration
+
+func (w withDelayTimeoutSettings) Resolve(s *CallSettings) {
+	s.RetrySettings.BackoffSettings.DelayTimeoutSettings = MultipliableDuration(w)
+}
+
+// WithDelayTimeoutSettings specifies:
+// - The initial delay time, in milliseconds, between the completion of
+//   the first failed request and the initiation of the first retrying
+//   request.
+// - The multiplier by which to increase the delay time between the
+//   completion of failed requests, and the initiation of the subsequent
+//   retrying request.
+// - The maximum delay time, in milliseconds, between requests. When this
+//   value is reached, `RetryDelayMultiplier` will no longer be used to
+//   increase delay time.
+func WithDelayTimeoutSettings(initial time.Duration, max time.Duration, multiplier float64) CallOption {
+	return withDelayTimeoutSettings(MultipliableDuration{initial, max, multiplier})
+}
+
+type withRPCTimeoutSettings MultipliableDuration
+
+func (w withRPCTimeoutSettings) Resolve(s *CallSettings) {
+	s.RetrySettings.BackoffSettings.RPCTimeoutSettings = MultipliableDuration(w)
+}
+
+// WithRPCTimeoutSettings specifies:
+// - The initial timeout parameter to the request.
+// - The multiplier by which to increase the timeout parameter between
+//   failed requests.
+// - The maximum timeout parameter, in milliseconds, for a request. When
+//   this value is reached, `RPCTimeoutMultiplier` will no longer be used
+//   to increase the timeout.
+func WithRPCTimeoutSettings(initial time.Duration, max time.Duration, multiplier float64) CallOption {
+	return withRPCTimeoutSettings(MultipliableDuration{initial, max, multiplier})
+}
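The `callOptions.Resolve` loop above applies options in order, so a later option overrides an earlier one for the same field. A stand-alone sketch of that merge behaviour (`Settings`, `Option`, and `withTimeout` here are simplified stand-ins, not the gax types):

```go
package main

import "fmt"

// Settings and Option mirror the shape of CallSettings/CallOption from
// call_option.go, reduced to a single field for illustration.
type Settings struct{ TimeoutMillis int }

type Option interface{ Resolve(*Settings) }

type withTimeout int

func (w withTimeout) Resolve(s *Settings) { s.TimeoutMillis = int(w) }

// resolve applies options in order, like callOptions.Resolve: the last
// option to touch a field wins.
func resolve(s *Settings, opts ...Option) *Settings {
	for _, o := range opts {
		o.Resolve(s)
	}
	return s
}

func main() {
	s := resolve(&Settings{}, withTimeout(1000), withTimeout(250))
	fmt.Println(s.TimeoutMillis) // later option wins
}
```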
diff --git a/go/src/github.com/googleapis/gax-go/call_option_test.go b/go/src/github.com/googleapis/gax-go/call_option_test.go
new file mode 100644
index 0000000..ce07d9f
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/call_option_test.go
@@ -0,0 +1,47 @@
+package gax
+
+import (
+	"reflect"
+	"testing"
+	"time"
+
+	"google.golang.org/grpc/codes"
+)
+
+func TestCallOptions(t *testing.T) {
+	expected := &CallSettings{
+		time.Second * 1,
+		RetrySettings{
+			map[codes.Code]bool{codes.Unavailable: true, codes.DeadlineExceeded: true},
+			BackoffSettings{
+				MultipliableDuration{time.Second * 2, time.Second * 4, 3.0},
+				MultipliableDuration{time.Second * 5, time.Second * 7, 6.0},
+			},
+		},
+	}
+
+	settings := &CallSettings{}
+	opts := []CallOption{
+		WithTimeout(time.Second * 1),
+		WithRetryCodes([]codes.Code{codes.Unavailable, codes.DeadlineExceeded}),
+		WithDelayTimeoutSettings(time.Second*2, time.Second*4, 3.0),
+		WithRPCTimeoutSettings(time.Second*5, time.Second*7, 6.0),
+	}
+	callOptions(opts).Resolve(settings)
+
+	if !reflect.DeepEqual(settings, expected) {
+		t.Errorf("piece-by-piece settings don't match their expected configuration")
+	}
+
+	settings = &CallSettings{}
+	expected.Resolve(settings)
+
+	if !reflect.DeepEqual(settings, expected) {
+		t.Errorf("whole settings don't match their expected configuration")
+	}
+
+	expected.RetrySettings.RetryCodes[codes.FailedPrecondition] = true
+	if _, ok := settings.RetrySettings.RetryCodes[codes.FailedPrecondition]; ok {
+		t.Errorf("unexpected modification in the RetryCodes map")
+	}
+}
diff --git a/go/src/github.com/googleapis/gax-go/client_option.go b/go/src/github.com/googleapis/gax-go/client_option.go
new file mode 100644
index 0000000..531a59c
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/client_option.go
@@ -0,0 +1,99 @@
+package gax
+
+import (
+	"google.golang.org/grpc"
+)
+
+type ClientOption interface {
+	Resolve(*ClientSettings)
+}
+
+type clientOptions []ClientOption
+
+func (opts clientOptions) Resolve(s *ClientSettings) *ClientSettings {
+	for _, opt := range opts {
+		opt.Resolve(s)
+	}
+	return s
+}
+
+type ClientSettings struct {
+	AppName     string
+	AppVersion  string
+	Endpoint    string
+	Scopes      []string
+	CallOptions map[string][]CallOption
+	DialOptions []grpc.DialOption
+}
+
+func (w ClientSettings) Resolve(s *ClientSettings) {
+	s.AppName = w.AppName
+	s.AppVersion = w.AppVersion
+	s.Endpoint = w.Endpoint
+	WithScopes(w.Scopes...).Resolve(s)
+	WithCallOptions(w.CallOptions).Resolve(s)
+	WithDialOptions(w.DialOptions...).Resolve(s)
+}
+
+type withAppName string
+
+func (w withAppName) Resolve(s *ClientSettings) {
+	s.AppName = string(w)
+}
+
+func WithAppName(appName string) ClientOption {
+	return withAppName(appName)
+}
+
+type withAppVersion string
+
+func (w withAppVersion) Resolve(s *ClientSettings) {
+	s.AppVersion = string(w)
+}
+
+func WithAppVersion(appVersion string) ClientOption {
+	return withAppVersion(appVersion)
+}
+
+type withEndpoint string
+
+func (w withEndpoint) Resolve(s *ClientSettings) {
+	s.Endpoint = string(w)
+}
+
+func WithEndpoint(endpoint string) ClientOption {
+	return withEndpoint(endpoint)
+}
+
+type withScopes []string
+
+func (w withScopes) Resolve(s *ClientSettings) {
+	s.Scopes = append([]string{}, w...)
+}
+
+func WithScopes(scopes ...string) ClientOption {
+	return withScopes(scopes)
+}
+
+type withCallOptions map[string][]CallOption
+
+func (w withCallOptions) Resolve(s *ClientSettings) {
+	s.CallOptions = make(map[string][]CallOption, len(w))
+	for key, value := range w {
+		s.CallOptions[key] = value
+	}
+}
+
+func WithCallOptions(callOptions map[string][]CallOption) ClientOption {
+	return withCallOptions(callOptions)
+}
+
+type withDialOptions []grpc.DialOption
+
+func (w withDialOptions) Resolve(s *ClientSettings) {
+	s.DialOptions = append([]grpc.DialOption{}, w...)
+}
+
+func WithDialOptions(opts ...grpc.DialOption) ClientOption {
+	return withDialOptions(opts)
+}
diff --git a/go/src/github.com/googleapis/gax-go/client_option_test.go b/go/src/github.com/googleapis/gax-go/client_option_test.go
new file mode 100644
index 0000000..7862f54
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/client_option_test.go
@@ -0,0 +1,51 @@
+package gax
+
+import (
+	"reflect"
+	"testing"
+	"time"
+
+	"google.golang.org/grpc"
+)
+
+func TestClientOptionsPieceByPiece(t *testing.T) {
+	expected := &ClientSettings{
+		"myapi",
+		"v0.1.0",
+		"https://example.com:443",
+		[]string{"https://example.com/auth/helloworld", "https://example.com/auth/otherthing"},
+		map[string][]CallOption{"ListWorlds": []CallOption{WithTimeout(3 * time.Second)}},
+		[]grpc.DialOption{},
+	}
+
+	settings := &ClientSettings{}
+	opts := []ClientOption{
+		WithAppName("myapi"),
+		WithAppVersion("v0.1.0"),
+		WithEndpoint("https://example.com:443"),
+		WithScopes("https://example.com/auth/helloworld", "https://example.com/auth/otherthing"),
+		WithCallOptions(map[string][]CallOption{"ListWorlds": []CallOption{WithTimeout(3 * time.Second)}}),
+		WithDialOptions(), // Can't compare function signatures for equality.
+	}
+	clientOptions(opts).Resolve(settings)
+
+	if !reflect.DeepEqual(settings, expected) {
+		t.Errorf("piece-by-piece settings don't match their expected configuration")
+	}
+
+	settings = &ClientSettings{}
+	expected.Resolve(settings)
+
+	if !reflect.DeepEqual(settings, expected) {
+		t.Errorf("whole settings don't match their expected configuration")
+	}
+
+	expected.Scopes[0] = "hello"
+	if settings.Scopes[0] == expected.Scopes[0] {
+		t.Errorf("unexpected modification in Scopes array")
+	}
+	expected.CallOptions["Impossible"] = []CallOption{WithTimeout(42 * time.Second)}
+	if _, ok := settings.CallOptions["Impossible"]; ok {
+		t.Errorf("unexpected modification in CallOptions map")
+	}
+}
diff --git a/go/src/github.com/googleapis/gax-go/dial.go b/go/src/github.com/googleapis/gax-go/dial.go
new file mode 100644
index 0000000..abdb300
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/dial.go
@@ -0,0 +1,29 @@
+package gax
+
+import (
+	"fmt"
+
+	"golang.org/x/net/context"
+	"golang.org/x/oauth2/google"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials"
+	"google.golang.org/grpc/credentials/oauth"
+)
+
+func DialGRPC(ctx context.Context, opts ...ClientOption) (*grpc.ClientConn, error) {
+	settings := &ClientSettings{}
+	clientOptions(opts).Resolve(settings)
+
+	var dialOpts = settings.DialOptions
+	if len(dialOpts) == 0 {
+		tokenSource, err := google.DefaultTokenSource(ctx, settings.Scopes...)
+		if err != nil {
+			return nil, fmt.Errorf("google.DefaultTokenSource: %v", err)
+		}
+		dialOpts = []grpc.DialOption{
+			grpc.WithPerRPCCredentials(oauth.TokenSource{TokenSource: tokenSource}),
+			grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")),
+		}
+	}
+	return grpc.Dial(settings.Endpoint, dialOpts...)
+}
diff --git a/go/src/github.com/googleapis/gax-go/gax.go b/go/src/github.com/googleapis/gax-go/gax.go
new file mode 100644
index 0000000..ff68a7c
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/gax.go
@@ -0,0 +1,3 @@
+package gax
+
+const Version = "0.1.0"
diff --git a/go/src/github.com/googleapis/gax-go/invoke.go b/go/src/github.com/googleapis/gax-go/invoke.go
new file mode 100644
index 0000000..5a4d833
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/invoke.go
@@ -0,0 +1,69 @@
+package gax
+
+import (
+	"time"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
+)
+
+// A user defined call stub.
+type APICall func(context.Context) error
+
+// scaleDuration returns the product of a and mult.
+func scaleDuration(a time.Duration, mult float64) time.Duration {
+	ns := float64(a) * mult
+	return time.Duration(ns)
+}
+
+// ensureTimeout returns a context with the given timeout applied if there
+// is no deadline on the context.
+func ensureTimeout(ctx context.Context, timeout time.Duration) context.Context {
+	if _, ok := ctx.Deadline(); !ok {
+		ctx, _ = context.WithTimeout(ctx, timeout)
+	}
+	return ctx
+}
+
+// invokeWithRetry calls stub using an exponential backoff retry mechanism
+// based on the values provided in retrySettings.
+func invokeWithRetry(ctx context.Context, stub APICall, callSettings CallSettings) error {
+	retrySettings := callSettings.RetrySettings
+	backoffSettings := callSettings.RetrySettings.BackoffSettings
+	delay := backoffSettings.DelayTimeoutSettings.Initial
+	timeout := backoffSettings.RPCTimeoutSettings.Initial
+	for {
+		// If the deadline is exceeded...
+		if ctx.Err() != nil {
+			return ctx.Err()
+		}
+		timeoutCtx, _ := context.WithTimeout(ctx, backoffSettings.RPCTimeoutSettings.Max)
+		timeoutCtx, _ = context.WithTimeout(timeoutCtx, timeout)
+		err := stub(timeoutCtx)
+		code := grpc.Code(err)
+		if code == codes.OK {
+			return nil
+		}
+		if !retrySettings.RetryCodes[code] {
+			return err
+		}
+		delayCtx, _ := context.WithTimeout(ctx, backoffSettings.DelayTimeoutSettings.Max)
+		delayCtx, _ = context.WithTimeout(delayCtx, delay)
+		<-delayCtx.Done()
+
+		delay = scaleDuration(delay, backoffSettings.DelayTimeoutSettings.Multiplier)
+		timeout = scaleDuration(timeout, backoffSettings.RPCTimeoutSettings.Multiplier)
+	}
+}
+
+// Invoke calls stub with a child of context modified by the specified options.
+func Invoke(ctx context.Context, stub APICall, opts ...CallOption) error {
+	settings := &CallSettings{}
+	callOptions(opts).Resolve(settings)
+	ctx = ensureTimeout(ctx, settings.Timeout)
+	if len(settings.RetrySettings.RetryCodes) > 0 {
+		return invokeWithRetry(ctx, stub, *settings)
+	}
+	return stub(ctx)
+}
diff --git a/go/src/github.com/googleapis/gax-go/invoke_test.go b/go/src/github.com/googleapis/gax-go/invoke_test.go
new file mode 100644
index 0000000..bdc7dab
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/invoke_test.go
@@ -0,0 +1,104 @@
+package gax
+
+import (
+	"testing"
+	"time"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
+)
+
+var (
+	testCallSettings = []CallOption{
+		WithRetryCodes([]codes.Code{codes.Unavailable, codes.DeadlineExceeded}),
+		// initial, max, multiplier
+		WithDelayTimeoutSettings(100*time.Millisecond, 300*time.Millisecond, 1.5),
+		WithRPCTimeoutSettings(50*time.Millisecond, 500*time.Millisecond, 3.0),
+		WithTimeout(1000 * time.Millisecond),
+	}
+)
+
+func TestInvokeWithContextTimeout(t *testing.T) {
+	ctx := context.Background()
+	deadline := time.Now().Add(42 * time.Second)
+	ctx, _ = context.WithDeadline(ctx, deadline)
+	Invoke(ctx, func(childCtx context.Context) error {
+		d, ok := childCtx.Deadline()
+		if !ok || d != deadline {
+			t.Errorf("expected call to have original timeout")
+		}
+		return nil
+	}, WithTimeout(1000*time.Millisecond))
+}
+
+func TestInvokeWithTimeout(t *testing.T) {
+	ctx := context.Background()
+	var ok bool
+	Invoke(ctx, func(childCtx context.Context) error {
+		_, ok = childCtx.Deadline()
+		return nil
+	}, WithTimeout(1000*time.Millisecond))
+	if !ok {
+		t.Errorf("expected call to have an assigned timeout")
+	}
+}
+
+func TestInvokeWithOKResponseWithTimeout(t *testing.T) {
+	ctx := context.Background()
+	var resp int
+	err := Invoke(ctx, func(childCtx context.Context) error {
+		resp = 42
+		return nil
+	}, WithTimeout(1000*time.Millisecond))
+	if resp != 42 || err != nil {
+		t.Errorf("expected call to return nil and set resp to 42")
+	}
+}
+
+func TestInvokeWithDeadlineAfterRetries(t *testing.T) {
+	ctx := context.Background()
+	count := 0
+
+	now := time.Now()
+	expectedTimeout := []time.Duration{
+		0,
+		150 * time.Millisecond,
+		450 * time.Millisecond,
+	}
+
+	err := Invoke(ctx, func(childCtx context.Context) error {
+		t.Log("delta:", time.Now().Sub(now.Add(expectedTimeout[count])))
+		if !time.Now().After(now.Add(expectedTimeout[count])) {
+			t.Errorf("expected %s to pass before this call", expectedTimeout[count])
+		}
+		count += 1
+		<-childCtx.Done()
+		// Workaround for `go vet`: https://github.com/grpc/grpc-go/issues/90
+		errf := grpc.Errorf
+		return errf(codes.DeadlineExceeded, "")
+	}, testCallSettings...)
+	if count != 3 || err == nil {
+		t.Errorf("expected call to retry 3 times and return an error")
+	}
+}
+
+func TestInvokeWithOKResponseAfterRetries(t *testing.T) {
+	ctx := context.Background()
+	count := 0
+
+	var resp int
+	err := Invoke(ctx, func(childCtx context.Context) error {
+		count += 1
+		if count == 3 {
+			resp = 42
+			return nil
+		}
+		<-childCtx.Done()
+		errf := grpc.Errorf
+		return errf(codes.DeadlineExceeded, "")
+	}, testCallSettings...)
+	if count != 3 || resp != 42 || err != nil {
+		t.Errorf("expected call to retry 3 times, return nil, and set resp to 42")
+	}
+}
diff --git a/go/src/github.com/googleapis/gax-go/path_template.go b/go/src/github.com/googleapis/gax-go/path_template.go
new file mode 100644
index 0000000..41bda94
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/path_template.go
@@ -0,0 +1,176 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package gax
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+)
+
+type matcher interface {
+	match([]string) (int, error)
+	String() string
+}
+
+type segment struct {
+	matcher
+	name string
+}
+
+type labelMatcher string
+
+func (ls labelMatcher) match(segments []string) (int, error) {
+	if len(segments) == 0 {
+		return 0, fmt.Errorf("expected %s but no more segments found", ls)
+	}
+	if segments[0] != string(ls) {
+		return 0, fmt.Errorf("expected %s but got %s", ls, segments[0])
+	}
+	return 1, nil
+}
+
+func (ls labelMatcher) String() string {
+	return string(ls)
+}
+
+type wildcardMatcher int
+
+func (wm wildcardMatcher) match(segments []string) (int, error) {
+	if len(segments) == 0 {
+		return 0, errors.New("no more segments found")
+	}
+	return 1, nil
+}
+
+func (wm wildcardMatcher) String() string {
+	return "*"
+}
+
+type pathWildcardMatcher int
+
+func (pwm pathWildcardMatcher) match(segments []string) (int, error) {
+	length := len(segments) - int(pwm)
+	if length <= 0 {
+		return 0, errors.New("not sufficient segments are supplied for path wildcard")
+	}
+	return length, nil
+}
+
+func (pwm pathWildcardMatcher) String() string {
+	return "**"
+}
+
+type ParseError struct {
+	Pos      int
+	Template string
+	Message  string
+}
+
+func (pe ParseError) Error() string {
+	return fmt.Sprintf("at %d of template '%s', %s", pe.Pos, pe.Template, pe.Message)
+}
+
+// PathTemplate manages the template to build and match with paths used
+// by API services. It holds a template and variable names in it, and
+// it can extract matched patterns from a path string or build a path
+// string from a binding.
+//
+// See http.proto in github.com/googleapis/googleapis/ for the details of
+// the template syntax.
+type PathTemplate struct {
+	segments []segment
+}
+
+// NewPathTemplate parses a path template, and returns a PathTemplate
+// instance if successful.
+func NewPathTemplate(template string) (*PathTemplate, error) {
+	return parsePathTemplate(template)
+}
+
+// MustCompilePathTemplate is like NewPathTemplate but panics if the
+// expression cannot be parsed. It simplifies safe initialization of
+// global variables holding compiled regular expressions.
+func MustCompilePathTemplate(template string) *PathTemplate {
+	pt, err := NewPathTemplate(template)
+	if err != nil {
+		panic(err)
+	}
+	return pt
+}
+
+// Match attempts to match the given path with the template, and returns
+// the mapping of the variable name to the matched pattern string.
+func (pt *PathTemplate) Match(path string) (map[string]string, error) {
+	paths := strings.Split(path, "/")
+	values := map[string]string{}
+	for _, segment := range pt.segments {
+		length, err := segment.match(paths)
+		if err != nil {
+			return nil, err
+		}
+		if segment.name != "" {
+			value := strings.Join(paths[:length], "/")
+			if oldValue, ok := values[segment.name]; ok {
+				values[segment.name] = oldValue + "/" + value
+			} else {
+				values[segment.name] = value
+			}
+		}
+		paths = paths[length:]
+	}
+	if len(paths) != 0 {
+		return nil, fmt.Errorf("Trailing path %s remains after the matching", strings.Join(paths, "/"))
+	}
+	return values, nil
+}
+
+// Render creates a path string from its template and the binding from
+// the variable name to the value.
+func (pt *PathTemplate) Render(binding map[string]string) (string, error) {
+	result := make([]string, 0, len(pt.segments))
+	var lastVariableName string
+	for _, segment := range pt.segments {
+		name := segment.name
+		if lastVariableName != "" && name == lastVariableName {
+			continue
+		}
+		lastVariableName = name
+		if name == "" {
+			result = append(result, segment.String())
+		} else if value, ok := binding[name]; ok {
+			result = append(result, value)
+		} else {
+			return "", fmt.Errorf("%s is not found", name)
+		}
+	}
+	built := strings.Join(result, "/")
+	return built, nil
+}
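To make the `Match` semantics concrete, here is a stripped-down, hypothetical re-implementation covering only literals and single-segment `{name}` variables (no `*`, `**`, or nested bindings); `matchSimple` is invented for illustration and is not part of the gax API:

```go
package main

import (
	"fmt"
	"strings"
)

// matchSimple binds each {name} template segment to the corresponding path
// segment and requires literal segments to match exactly, echoing what
// PathTemplate.Match does for the simplest templates.
func matchSimple(template, path string) (map[string]string, error) {
	tsegs := strings.Split(template, "/")
	psegs := strings.Split(path, "/")
	if len(tsegs) != len(psegs) {
		return nil, fmt.Errorf("expected %d segments, got %d", len(tsegs), len(psegs))
	}
	vals := map[string]string{}
	for i, t := range tsegs {
		if strings.HasPrefix(t, "{") && strings.HasSuffix(t, "}") {
			vals[t[1:len(t)-1]] = psegs[i]
		} else if t != psegs[i] {
			return nil, fmt.Errorf("expected %q, got %q", t, psegs[i])
		}
	}
	return vals, nil
}

func main() {
	v, err := matchSimple("projects/{project}/zones/{zone}",
		"projects/p1/zones/us-central1-a")
	fmt.Println(v, err)
}
```

The real parser additionally handles `*`, `**` (whose span length is only known once the whole template is parsed, see path_template_parser.go), and `{name=...}` sub-patterns.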
diff --git a/go/src/github.com/googleapis/gax-go/path_template_parser.go b/go/src/github.com/googleapis/gax-go/path_template_parser.go
new file mode 100644
index 0000000..f49a571
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/path_template_parser.go
@@ -0,0 +1,198 @@
+package gax
+
+import (
+	"fmt"
+	"io"
+	"strings"
+)
+
+// This parser follows the syntax of path templates, from
+// https://github.com/googleapis/googleapis/blob/master/google/api/http.proto.
+// The differences are that there is no custom verb, we allow the initial slash
+// to be absent, and that we are not strict as
+// https://tools.ietf.org/html/rfc6570 about the characters in identifiers and
+// literals.
+
+type pathTemplateParser struct {
+	r                *strings.Reader
+	runeCount        int             // the number of the current rune in the original string
+	nextVar          int             // the number to use for the next unnamed variable
+	seenName         map[string]bool // names we've seen already
+	seenPathWildcard bool            // have we seen "**" already?
+}
+
+func parsePathTemplate(template string) (pt *PathTemplate, err error) {
+	p := &pathTemplateParser{
+		r:        strings.NewReader(template),
+		seenName: map[string]bool{},
+	}
+
+	// Handle panics with strings like errors.
+	// See pathTemplateParser.error, below.
+	defer func() {
+		if x := recover(); x != nil {
+			errmsg, ok := x.(errString)
+			if !ok {
+				panic(x)
+			}
+			pt = nil
+			err = ParseError{p.runeCount, template, string(errmsg)}
+		}
+	}()
+
+	segs := p.template()
+	// If there is a path wildcard, set its length. We can't do this
+	// until we know how many segments we've got all together.
+	for i, seg := range segs {
+		if _, ok := seg.matcher.(pathWildcardMatcher); ok {
+			segs[i].matcher = pathWildcardMatcher(len(segs) - i - 1)
+			break
+		}
+	}
+	return &PathTemplate{segments: segs}, nil
+
+}
+
+// Used to indicate errors "thrown" by this parser. We don't use string because
+// many parts of the standard library panic with strings.
+type errString string
+
+// Terminates parsing immediately with an error.
+func (p *pathTemplateParser) error(msg string) {
+	panic(errString(msg))
+}
+
+// Template = [ "/" ] Segments
+func (p *pathTemplateParser) template() []segment {
+	var segs []segment
+	if p.consume('/') {
+		// Initial '/' needs an initial empty matcher.
+		segs = append(segs, segment{matcher: labelMatcher("")})
+	}
+	return append(segs, p.segments("")...)
+}
+
+// Segments = Segment { "/" Segment }
+func (p *pathTemplateParser) segments(name string) []segment {
+	var segs []segment
+	for {
+		subsegs := p.segment(name)
+		segs = append(segs, subsegs...)
+		if !p.consume('/') {
+			break
+		}
+	}
+	return segs
+}
+
+// Segment  = "*" | "**" | LITERAL | Variable
+func (p *pathTemplateParser) segment(name string) []segment {
+	if p.consume('*') {
+		if name == "" {
+			name = fmt.Sprintf("$%d", p.nextVar)
+			p.nextVar++
+		}
+		if p.consume('*') {
+			if p.seenPathWildcard {
+				p.error("multiple '**' disallowed")
+			}
+			p.seenPathWildcard = true
+			// We'll change 0 to the right number at the end.
+			return []segment{{name: name, matcher: pathWildcardMatcher(0)}}
+		}
+		return []segment{{name: name, matcher: wildcardMatcher(0)}}
+	}
+	if p.consume('{') {
+		if name != "" {
+			p.error("recursive named bindings are not allowed")
+		}
+		return p.variable()
+	}
+	return []segment{{name: name, matcher: labelMatcher(p.literal())}}
+}
+
+// Variable = "{" FieldPath [ "=" Segments ] "}"
+// "{" is already consumed.
+func (p *pathTemplateParser) variable() []segment {
+	// Simplification: treat FieldPath as LITERAL, instead of IDENT { '.' IDENT }
+	name := p.literal()
+	if p.seenName[name] {
+		p.error(name + " appears multiple times")
+	}
+	p.seenName[name] = true
+	var segs []segment
+	if p.consume('=') {
+		segs = p.segments(name)
+	} else {
+		// "{var}" is equivalent to "{var=*}"
+		segs = []segment{{name: name, matcher: wildcardMatcher(0)}}
+	}
+	if !p.consume('}') {
+		p.error("expected '}'")
+	}
+	return segs
+}
+
+// A literal is any sequence of characters other than a few special ones.
+// The list of stop characters is not quite the same as in the template RFC.
+func (p *pathTemplateParser) literal() string {
+	lit := p.consumeUntil("/*}{=")
+	if lit == "" {
+		p.error("empty literal")
+	}
+	return lit
+}
+
+// Read runes until EOF or one of the runes in stopRunes is encountered.
+// If the latter, unread the stop rune. Return the accumulated runes as a string.
+func (p *pathTemplateParser) consumeUntil(stopRunes string) string {
+	var runes []rune
+	for {
+		r, ok := p.readRune()
+		if !ok {
+			break
+		}
+		if strings.IndexRune(stopRunes, r) >= 0 {
+			p.unreadRune()
+			break
+		}
+		runes = append(runes, r)
+	}
+	return string(runes)
+}
+
+// If the next rune is r, consume it and return true.
+// Otherwise, leave the input unchanged and return false.
+func (p *pathTemplateParser) consume(r rune) bool {
+	rr, ok := p.readRune()
+	if !ok {
+		return false
+	}
+	if r == rr {
+		return true
+	}
+	p.unreadRune()
+	return false
+}
+
+// Read the next rune from the input. Return it.
+// The second return value is false at EOF.
+func (p *pathTemplateParser) readRune() (rune, bool) {
+	r, _, err := p.r.ReadRune()
+	if err == io.EOF {
+		return r, false
+	}
+	if err != nil {
+		p.error(err.Error())
+	}
+	p.runeCount++
+	return r, true
+}
+
+// Put the last rune that was read back on the input.
+func (p *pathTemplateParser) unreadRune() {
+	if err := p.r.UnreadRune(); err != nil {
+		p.error(err.Error())
+	}
+	p.runeCount--
+}
diff --git a/go/src/github.com/googleapis/gax-go/path_template_test.go b/go/src/github.com/googleapis/gax-go/path_template_test.go
new file mode 100644
index 0000000..49dea47
--- /dev/null
+++ b/go/src/github.com/googleapis/gax-go/path_template_test.go
@@ -0,0 +1,211 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package gax
+
+import "testing"
+
+func TestPathTemplateMatchRender(t *testing.T) {
+	testCases := []struct {
+		message  string
+		template string
+		path     string
+		values   map[string]string
+	}{
+		{
+			"base",
+			"buckets/*/*/objects/*",
+			"buckets/f/o/objects/bar",
+			map[string]string{"$0": "f", "$1": "o", "$2": "bar"},
+		},
+		{
+			"path wildcards",
+			"bar/**/foo/*",
+			"bar/foo/foo/foo/bar",
+			map[string]string{"$0": "foo/foo", "$1": "bar"},
+		},
+		{
+			"named binding",
+			"buckets/{foo}/objects/*",
+			"buckets/foo/objects/bar",
+			map[string]string{"$0": "bar", "foo": "foo"},
+		},
+		{
+			"named binding with colon",
+			"buckets/{foo}/objects/*",
+			"buckets/foo:boo/objects/bar",
+			map[string]string{"$0": "bar", "foo": "foo:boo"},
+		},
+		{
+			"named binding with complex patterns",
+			"buckets/{foo=x/*/y/**}/objects/*",
+			"buckets/x/foo/y/bar/baz/objects/quox",
+			map[string]string{"$0": "quox", "foo": "x/foo/y/bar/baz"},
+		},
+		{
+			"starts with slash",
+			"/foo/*",
+			"/foo/bar",
+			map[string]string{"$0": "bar"},
+		},
+	}
+	for _, testCase := range testCases {
+		pt, err := NewPathTemplate(testCase.template)
+		if err != nil {
+			t.Errorf("[%s] Failed to parse template %s: %v", testCase.message, testCase.template, err)
+			continue
+		}
+		values, err := pt.Match(testCase.path)
+		if err != nil {
+			t.Errorf("[%s] PathTemplate '%s' failed to match with '%s', %v", testCase.message, testCase.template, testCase.path, err)
+			continue
+		}
+		for key, expected := range testCase.values {
+			actual, ok := values[key]
+			if !ok {
+				t.Errorf("[%s] The matched data is missing the value for %s", testCase.message, key)
+				continue
+			}
+			delete(values, key)
+			if actual != expected {
+				t.Errorf("[%s] Failed to match: expected value for '%s' to be '%s', but got '%s'", testCase.message, key, expected, actual)
+			}
+		}
+		if len(values) != 0 {
+			t.Errorf("[%s] The matched data has unexpected keys: %v", testCase.message, values)
+		}
+		built, err := pt.Render(testCase.values)
+		if err != nil || built != testCase.path {
+			t.Errorf("[%s] Built path '%s' is different from the expected '%s', %v", testCase.message, built, testCase.path, err)
+		}
+	}
+}
+
+func TestPathTemplateMatchFailure(t *testing.T) {
+	testCases := []struct {
+		message  string
+		template string
+		path     string
+	}{
+		{
+			"too many paths",
+			"buckets/*/*/objects/*",
+			"buckets/f/o/o/objects/bar",
+		},
+		{
+			"missing last path",
+			"buckets/*/*/objects/*",
+			"buckets/f/o/objects",
+		},
+		{
+			"too many paths at end",
+			"buckets/*/*/objects/*",
+			"buckets/f/o/objects/too/long",
+		},
+	}
+	for _, testCase := range testCases {
+		pt, err := NewPathTemplate(testCase.template)
+		if err != nil {
+			t.Errorf("[%s] Failed to parse template %s: %v", testCase.message, testCase.template, err)
+			continue
+		}
+		if values, err := pt.Match(testCase.path); err == nil {
+			t.Errorf("[%s] PathTemplate %s should not match %s, but it did. Match result: %v", testCase.message, testCase.template, testCase.path, values)
+		}
+	}
+}
+
+func TestPathTemplateRenderTooManyValues(t *testing.T) {
+	// Test cases where Render() succeeds but Match() doesn't return the same map.
+	testCases := []struct {
+		message  string
+		template string
+		values   map[string]string
+		expected string
+	}{
+		{
+			"too many",
+			"bar/*/foo/*",
+			map[string]string{"$0": "_1", "$1": "_2", "$2": "_3"},
+			"bar/_1/foo/_2",
+		},
+	}
+	for _, testCase := range testCases {
+		pt, err := NewPathTemplate(testCase.template)
+		if err != nil {
+			t.Errorf("[%s] Failed to parse template %s (error %v)", testCase.message, testCase.template, err)
+			continue
+		}
+		if result, err := pt.Render(testCase.values); err != nil || result != testCase.expected {
+			t.Errorf("[%s] Failed to build the path (expected '%s' but returned '%s')", testCase.message, testCase.expected, result)
+		}
+	}
+}
+
+func TestPathTemplateParseErrors(t *testing.T) {
+	testCases := []struct {
+		message  string
+		template string
+	}{
+		{
+			"multiple path wildcards",
+			"foo/**/bar/**",
+		},
+		{
+			"recursive named bindings",
+			"foo/{foo=foo/{bar}/baz/*}/baz/*",
+		},
+		{
+			"complicated multiple path wildcards patterns",
+			"foo/{foo=foo/**/bar/*}/baz/**",
+		},
+		{
+			"consecutive slashes",
+			"foo//bar",
+		},
+		{
+			"invalid variable pattern",
+			"foo/{foo=foo/*/}bar",
+		},
+		{
+			"same name multiple times",
+			"foo/{foo}/bar/{foo}",
+		},
+		{
+			"empty string after '='",
+			"foo/{foo=}/bar",
+		},
+	}
+	for _, testCase := range testCases {
+		if pt, err := NewPathTemplate(testCase.template); err == nil {
+			t.Errorf("[%s] Template '%s' should fail to be parsed, but succeeded and returned %+v", testCase.message, testCase.template, pt)
+		}
+	}
+}
diff --git a/go/src/github.com/googleapis/proto-client-go/CONTRIBUTING.md b/go/src/github.com/googleapis/proto-client-go/CONTRIBUTING.md
new file mode 100644
index 0000000..2827b7d
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/CONTRIBUTING.md
@@ -0,0 +1,27 @@
+Want to contribute? Great! First, read this page (including the small print at the end).
+
+### Before you contribute
+Before we can use your code, you must sign the
+[Google Individual Contributor License Agreement]
+(https://cla.developers.google.com/about/google-individual)
+(CLA), which you can do online. The CLA is necessary mainly because you own the
+copyright to your changes, even after your contribution becomes part of our
+codebase, so we need your permission to use and distribute your code. We also
+need to be sure of various other things—for instance that you'll tell us if you
+know that your code infringes on other people's patents. You don't have to sign
+the CLA until after you've submitted your code for review and a member has
+approved it, but you must do it before we can put your code into our codebase.
+Before you start working on a larger contribution, you should get in touch with
+us first through the issue tracker with your idea so that we can help out and
+possibly guide you. Coordinating up front makes it much easier to avoid
+frustration later on.
+
+### Code reviews
+All submissions, including submissions by project members, require review. We
+use Github pull requests for this purpose.
+
+### The small print
+Contributions made by corporations are covered by a different agreement than
+the one above, the
+[Software Grant and Corporate Contributor License Agreement]
+(https://cla.developers.google.com/about/google-corporate).
diff --git a/go/src/github.com/googleapis/proto-client-go/LICENSE b/go/src/github.com/googleapis/proto-client-go/LICENSE
new file mode 100644
index 0000000..6d16b65
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/LICENSE
@@ -0,0 +1,27 @@
+Copyright 2016, Google Inc.
+All rights reserved.
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/go/src/github.com/googleapis/proto-client-go/README.google b/go/src/github.com/googleapis/proto-client-go/README.google
new file mode 100644
index 0000000..c7dd675
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/README.google
@@ -0,0 +1,10 @@
+URL: https://github.com/googleapis/proto-client-go/archive/43a2ab8c54baf68a3ce9a4792bd62f0331bab554.zip
+Version: 43a2ab8c54baf68a3ce9a4792bd62f0331bab554
+License: New BSD
+License File: LICENSE
+
+Description:
+Generated proto and gRPC classes for Google Cloud Platform in Go
+
+Local Modifications:
+No modifications.
diff --git a/go/src/github.com/googleapis/proto-client-go/README.md b/go/src/github.com/googleapis/proto-client-go/README.md
new file mode 100644
index 0000000..8459c9b
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/README.md
@@ -0,0 +1,8 @@
+Generated proto and gRPC classes for Google Cloud Platform in Go
+================================================================
+
+This repository contains the Go classes generated from protos contained in
+[Google APIs][].
+
+[gRPC]: http://grpc.io
+[Google APIs]: https://github.com/googleapis/googleapis/
diff --git a/go/src/github.com/googleapis/proto-client-go/api/label.pb.go b/go/src/github.com/googleapis/proto-client-go/api/label.pb.go
new file mode 100644
index 0000000..871df81
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/api/label.pb.go
@@ -0,0 +1,81 @@
+// Code generated by protoc-gen-go.
+// source: google/api/label.proto
+// DO NOT EDIT!
+
+package api
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// Value types that can be used as label values.
+type LabelDescriptor_ValueType int32
+
+const (
+	// A variable-length string. This is the default.
+	LabelDescriptor_STRING LabelDescriptor_ValueType = 0
+	// Boolean; true or false.
+	LabelDescriptor_BOOL LabelDescriptor_ValueType = 1
+	// A 64-bit signed integer.
+	LabelDescriptor_INT64 LabelDescriptor_ValueType = 2
+)
+
+var LabelDescriptor_ValueType_name = map[int32]string{
+	0: "STRING",
+	1: "BOOL",
+	2: "INT64",
+}
+var LabelDescriptor_ValueType_value = map[string]int32{
+	"STRING": 0,
+	"BOOL":   1,
+	"INT64":  2,
+}
+
+func (x LabelDescriptor_ValueType) String() string {
+	return proto.EnumName(LabelDescriptor_ValueType_name, int32(x))
+}
+func (LabelDescriptor_ValueType) EnumDescriptor() ([]byte, []int) { return fileDescriptor2, []int{0, 0} }
+
+// A description of a label.
+type LabelDescriptor struct {
+	// The label key.
+	Key string `protobuf:"bytes,1,opt,name=key" json:"key,omitempty"`
+	// The type of data that can be assigned to the label.
+	ValueType LabelDescriptor_ValueType `protobuf:"varint,2,opt,name=value_type,json=valueType,enum=google.api.LabelDescriptor_ValueType" json:"value_type,omitempty"`
+	// A human-readable description for the label.
+	Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+}
+
+func (m *LabelDescriptor) Reset()                    { *m = LabelDescriptor{} }
+func (m *LabelDescriptor) String() string            { return proto.CompactTextString(m) }
+func (*LabelDescriptor) ProtoMessage()               {}
+func (*LabelDescriptor) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{0} }
+
+func init() {
+	proto.RegisterType((*LabelDescriptor)(nil), "google.api.LabelDescriptor")
+	proto.RegisterEnum("google.api.LabelDescriptor_ValueType", LabelDescriptor_ValueType_name, LabelDescriptor_ValueType_value)
+}
+
+var fileDescriptor2 = []byte{
+	// 239 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x12, 0x4b, 0xcf, 0xcf, 0x4f,
+	0xcf, 0x49, 0xd5, 0x4f, 0x2c, 0xc8, 0xd4, 0xcf, 0x49, 0x4c, 0x4a, 0xcd, 0xd1, 0x2b, 0x28, 0xca,
+	0x2f, 0xc9, 0x17, 0xe2, 0x82, 0x88, 0xeb, 0x01, 0xc5, 0x95, 0x76, 0x32, 0x72, 0xf1, 0xfb, 0x80,
+	0xe4, 0x5c, 0x52, 0x8b, 0x93, 0x8b, 0x32, 0x0b, 0x4a, 0xf2, 0x8b, 0x84, 0x04, 0xb8, 0x98, 0xb3,
+	0x53, 0x2b, 0x25, 0x18, 0x15, 0x18, 0x35, 0x38, 0x83, 0x40, 0x4c, 0x21, 0x17, 0x2e, 0xae, 0xb2,
+	0xc4, 0x9c, 0xd2, 0xd4, 0xf8, 0x92, 0xca, 0x82, 0x54, 0x09, 0x26, 0xa0, 0x04, 0x9f, 0x91, 0xaa,
+	0x1e, 0xc2, 0x18, 0x3d, 0x34, 0x23, 0xf4, 0xc2, 0x40, 0xaa, 0x43, 0x80, 0x8a, 0x83, 0x38, 0xcb,
+	0x60, 0x4c, 0x21, 0x05, 0x2e, 0xee, 0x14, 0xa8, 0x92, 0xcc, 0xfc, 0x3c, 0x09, 0x66, 0xb0, 0xf9,
+	0xc8, 0x42, 0x4a, 0x3a, 0x5c, 0x9c, 0x70, 0x9d, 0x42, 0x5c, 0x5c, 0x6c, 0xc1, 0x21, 0x41, 0x9e,
+	0x7e, 0xee, 0x02, 0x0c, 0x42, 0x1c, 0x5c, 0x2c, 0x4e, 0xfe, 0xfe, 0x3e, 0x02, 0x8c, 0x42, 0x9c,
+	0x5c, 0xac, 0x9e, 0x7e, 0x21, 0x66, 0x26, 0x02, 0x4c, 0x4e, 0x9e, 0x5c, 0x7c, 0xc9, 0xf9, 0xb9,
+	0x48, 0xce, 0x70, 0xe2, 0x02, 0xbb, 0x23, 0x00, 0xe4, 0xcb, 0x00, 0xc6, 0x28, 0xcd, 0xf4, 0xcc,
+	0x92, 0x8c, 0xd2, 0x24, 0x3d, 0xa0, 0x22, 0x7d, 0x88, 0x22, 0xa0, 0x9a, 0x62, 0x7d, 0x70, 0x20,
+	0xe8, 0x26, 0xe7, 0x64, 0xa6, 0xe6, 0x95, 0xe8, 0xa6, 0xe7, 0x83, 0x82, 0x27, 0x89, 0x0d, 0x2c,
+	0x68, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0xa6, 0x63, 0xc3, 0x62, 0x33, 0x01, 0x00, 0x00,
+}
diff --git a/go/src/github.com/googleapis/proto-client-go/api/monitored_resource.pb.go b/go/src/github.com/googleapis/proto-client-go/api/monitored_resource.pb.go
new file mode 100644
index 0000000..fca87be
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/api/monitored_resource.pb.go
@@ -0,0 +1,102 @@
+// Code generated by protoc-gen-go.
+// source: google/api/monitored_resource.proto
+// DO NOT EDIT!
+
+package api
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// A descriptor that describes the schema of [MonitoredResource][google.api.MonitoredResource].
+type MonitoredResourceDescriptor struct {
+	// The monitored resource type. For example, the type `"cloudsql_database"`
+	// represents databases in Google Cloud SQL.
+	Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
+	// A concise name for the monitored resource type that can be displayed in
+	// user interfaces. For example, `"Google Cloud SQL Database"`.
+	DisplayName string `protobuf:"bytes,2,opt,name=display_name,json=displayName" json:"display_name,omitempty"`
+	// A detailed description of the monitored resource type that can be used in
+	// documentation.
+	Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+	// A set of labels that can be used to describe instances of this monitored
+	// resource type. For example, Google Cloud SQL databases can be labeled with
+	// their `"database_id"` and their `"zone"`.
+	Labels []*LabelDescriptor `protobuf:"bytes,4,rep,name=labels" json:"labels,omitempty"`
+}
+
+func (m *MonitoredResourceDescriptor) Reset()                    { *m = MonitoredResourceDescriptor{} }
+func (m *MonitoredResourceDescriptor) String() string            { return proto.CompactTextString(m) }
+func (*MonitoredResourceDescriptor) ProtoMessage()               {}
+func (*MonitoredResourceDescriptor) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{0} }
+
+func (m *MonitoredResourceDescriptor) GetLabels() []*LabelDescriptor {
+	if m != nil {
+		return m.Labels
+	}
+	return nil
+}
+
+// A monitored resource describes a resource that can be used for monitoring
+// purpose. It can also be used for logging, billing, and other purposes. Each
+// resource has a `type` and a set of `labels`. The labels contain information
+// that identifies the resource and describes attributes of it. For example,
+// you can use monitored resource to describe a normal file, where the resource
+// has `type` as `"file"`, the label `path` identifies the file, and the label
+// `size` describes the file size. The monitoring system can use a set of
+// monitored resources of files to generate file size distribution.
+type MonitoredResource struct {
+	// The monitored resource type. This field must match the corresponding
+	// [MonitoredResourceDescriptor.type][google.api.MonitoredResourceDescriptor.type] to this resource. For example,
+	// `"cloudsql_database"` represents Cloud SQL databases.
+	Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
+	// Values for some or all of the labels listed in the associated monitored
+	// resource descriptor. For example, you specify a specific Cloud SQL database
+	// by supplying values for both the `"database_id"` and `"zone"` labels.
+	Labels map[string]string `protobuf:"bytes,2,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
+}
+
+func (m *MonitoredResource) Reset()                    { *m = MonitoredResource{} }
+func (m *MonitoredResource) String() string            { return proto.CompactTextString(m) }
+func (*MonitoredResource) ProtoMessage()               {}
+func (*MonitoredResource) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{1} }
+
+func (m *MonitoredResource) GetLabels() map[string]string {
+	if m != nil {
+		return m.Labels
+	}
+	return nil
+}
+
+func init() {
+	proto.RegisterType((*MonitoredResourceDescriptor)(nil), "google.api.MonitoredResourceDescriptor")
+	proto.RegisterType((*MonitoredResource)(nil), "google.api.MonitoredResource")
+}
+
+var fileDescriptor3 = []byte{
+	// 300 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x6c, 0x51, 0xcf, 0x4b, 0xfb, 0x30,
+	0x14, 0x27, 0xeb, 0xbe, 0x83, 0xef, 0xab, 0x88, 0x06, 0x19, 0x65, 0xbb, 0xd4, 0x79, 0xd9, 0x0e,
+	0x6b, 0xc1, 0x5d, 0xd4, 0x9b, 0x43, 0x6f, 0x2a, 0xa3, 0xe0, 0xc5, 0xcb, 0xe8, 0x8f, 0x50, 0x83,
+	0x69, 0x53, 0x92, 0x54, 0xe8, 0x1f, 0xe4, 0xc9, 0x7f, 0xd2, 0x36, 0x4d, 0x6d, 0xa1, 0xde, 0x92,
+	0xcf, 0xfb, 0xfc, 0xe2, 0x3d, 0xb8, 0x4a, 0x39, 0x4f, 0x19, 0xf1, 0xc3, 0x82, 0xfa, 0x19, 0xcf,
+	0xa9, 0xe2, 0x82, 0x24, 0x47, 0x41, 0x24, 0x2f, 0x45, 0x4c, 0xbc, 0x42, 0x70, 0xc5, 0x31, 0xb4,
+	0x24, 0xaf, 0x26, 0x2d, 0xe6, 0x03, 0x01, 0x0b, 0x23, 0xc2, 0x5a, 0xce, 0xea, 0x1b, 0xc1, 0xf2,
+	0xb9, 0x33, 0x08, 0x8c, 0xfe, 0x81, 0xc8, 0x58, 0xd0, 0xa2, 0xc6, 0x30, 0x86, 0xa9, 0xaa, 0x0a,
+	0xe2, 0x20, 0x17, 0xad, 0xff, 0x07, 0xfa, 0x8d, 0x2f, 0xe1, 0x24, 0xa1, 0xb2, 0x60, 0x61, 0x75,
+	0xcc, 0xc3, 0x8c, 0x38, 0x13, 0x3d, 0xb3, 0x0d, 0xf6, 0x52, 0x43, 0xd8, 0x05, 0x3b, 0x31, 0x26,
+	0x94, 0xe7, 0x8e, 0x65, 0x18, 0x3d, 0x84, 0x77, 0x30, 0xd3, 0x3d, 0xa4, 0x33, 0x75, 0xad, 0xb5,
+	0x7d, 0xbd, 0xf4, 0xfa, 0xb6, 0xde, 0x53, 0x33, 0xe9, 0x5b, 0x04, 0x86, 0xba, 0xfa, 0x42, 0x70,
+	0x3e, 0x6a, 0xfb, 0x67, 0xc7, 0xfb, 0x5f, 0xfb, 0x89, 0xb6, 0xdf, 0x0c, 0xed, 0x47, 0x16, 0x6d,
+	0xa0, 0x7c, 0xcc, 0x95, 0xa8, 0xba, 0xb0, 0xc5, 0x2d, 0xd8, 0x03, 0x18, 0x9f, 0x81, 0xf5, 0x41,
+	0x2a, 0x13, 0xd2, 0x3c, 0xf1, 0x05, 0xfc, 0xfb, 0x0c, 0x59, 0xd9, 0x2d, 0xa0, 0xfd, 0xdc, 0x4d,
+	0x6e, 0xd0, 0xfe, 0x15, 0x4e, 0x63, 0x9e, 0x0d, 0x22, 0xf7, 0xf3, 0x51, 0xe6, 0xa1, 0xd9, 0xff,
+	0x01, 0xbd, 0x6d, 0x52, 0xaa, 0xde, 0xcb, 0xc8, 0xab, 0x05, 0x7e, 0x2b, 0xa8, 0xf9, 0xd2, 0xd7,
+	0xe7, 0xd9, 0xc6, 0x8c, 0x92, 0x5c, 0x6d, 0x53, 0xde, 0x1c, 0x2e, 0x9a, 0x69, 0x70, 0xf7, 0x13,
+	0x00, 0x00, 0xff, 0xff, 0xca, 0x0d, 0x92, 0x46, 0xfe, 0x01, 0x00, 0x00,
+}
diff --git a/go/src/github.com/googleapis/proto-client-go/logging/type_/http_request.pb.go b/go/src/github.com/googleapis/proto-client-go/logging/type_/http_request.pb.go
new file mode 100644
index 0000000..58b5bb5
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/logging/type_/http_request.pb.go
@@ -0,0 +1,101 @@
+// Code generated by protoc-gen-go.
+// source: google/logging/type/http_request.proto
+// DO NOT EDIT!
+
+/*
+Package type_ is a generated protocol buffer package.
+
+It is generated from these files:
+	google/logging/type/http_request.proto
+	google/logging/type/log_severity.proto
+
+It has these top-level messages:
+	HttpRequest
+*/
+package type_
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import _ "github.com/googleapis/proto-client-go/api"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
+// A common proto for logging HTTP requests.
+//
+type HttpRequest struct {
+	// The request method. Examples: `"GET"`, `"HEAD"`, `"PUT"`, `"POST"`.
+	RequestMethod string `protobuf:"bytes,1,opt,name=request_method,json=requestMethod" json:"request_method,omitempty"`
+	// The scheme (http, https), the host name, the path and the query
+	// portion of the URL that was requested.
+	// Example: `"http://example.com/some/info?color=red"`.
+	RequestUrl string `protobuf:"bytes,2,opt,name=request_url,json=requestUrl" json:"request_url,omitempty"`
+	// The size of the HTTP request message in bytes, including the request
+	// headers and the request body.
+	RequestSize int64 `protobuf:"varint,3,opt,name=request_size,json=requestSize" json:"request_size,omitempty"`
+	// The response code indicating the status of response.
+	// Examples: 200, 404.
+	Status int32 `protobuf:"varint,4,opt,name=status" json:"status,omitempty"`
+	// The size of the HTTP response message sent back to the client, in bytes,
+	// including the response headers and the response body.
+	ResponseSize int64 `protobuf:"varint,5,opt,name=response_size,json=responseSize" json:"response_size,omitempty"`
+	// The user agent sent by the client. Example:
+	// `"Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Q312461; .NET CLR 1.0.3705)"`.
+	UserAgent string `protobuf:"bytes,6,opt,name=user_agent,json=userAgent" json:"user_agent,omitempty"`
+	// The IP address (IPv4 or IPv6) of the client that issued the HTTP
+	// request. Examples: `"192.168.1.1"`, `"FE80::0202:B3FF:FE1E:8329"`.
+	RemoteIp string `protobuf:"bytes,7,opt,name=remote_ip,json=remoteIp" json:"remote_ip,omitempty"`
+	// The referer URL of the request, as defined in
+	// [HTTP/1.1 Header Field Definitions](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).
+	Referer string `protobuf:"bytes,8,opt,name=referer" json:"referer,omitempty"`
+	// Whether or not an entity was served from cache
+	// (with or without validation).
+	CacheHit bool `protobuf:"varint,9,opt,name=cache_hit,json=cacheHit" json:"cache_hit,omitempty"`
+	// Whether or not the response was validated with the origin server before
+	// being served from cache. This field is only meaningful if `cache_hit` is
+	// True.
+	ValidatedWithOriginServer bool `protobuf:"varint,10,opt,name=validated_with_origin_server,json=validatedWithOriginServer" json:"validated_with_origin_server,omitempty"`
+}
+
+func (m *HttpRequest) Reset()                    { *m = HttpRequest{} }
+func (m *HttpRequest) String() string            { return proto.CompactTextString(m) }
+func (*HttpRequest) ProtoMessage()               {}
+func (*HttpRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+func init() {
+	proto.RegisterType((*HttpRequest)(nil), "google.logging.type.HttpRequest")
+}
+
+var fileDescriptor0 = []byte{
+	// 368 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x6c, 0x92, 0xcd, 0x4e, 0xe3, 0x30,
+	0x14, 0x85, 0xd5, 0x76, 0xda, 0x26, 0x6e, 0x67, 0x34, 0xf2, 0x48, 0x33, 0x1e, 0x28, 0xe2, 0x4f,
+	0x20, 0x36, 0x4d, 0x16, 0x7d, 0x00, 0x44, 0x57, 0x65, 0x81, 0xa8, 0x52, 0x21, 0x24, 0x36, 0x56,
+	0x9a, 0x5e, 0x1c, 0x4b, 0x69, 0x1c, 0x6c, 0xa7, 0x08, 0xde, 0x98, 0xb7, 0xe0, 0xc6, 0x49, 0x10,
+	0x48, 0x2c, 0x7d, 0xee, 0x77, 0xce, 0x75, 0x72, 0x4c, 0xce, 0x85, 0x52, 0x22, 0x83, 0x30, 0x53,
+	0x42, 0xc8, 0x5c, 0x84, 0xf6, 0xa5, 0x80, 0x30, 0xb5, 0xb6, 0xe0, 0x1a, 0x9e, 0x4a, 0x30, 0x36,
+	0x28, 0xb4, 0xb2, 0x8a, 0xfe, 0xa9, 0xb9, 0xa0, 0xe1, 0x82, 0x8a, 0xdb, 0x9b, 0x34, 0xe6, 0xb8,
+	0x90, 0x61, 0x9c, 0xe7, 0xca, 0xc6, 0x56, 0xaa, 0xdc, 0xd4, 0x96, 0x93, 0xb7, 0x2e, 0x19, 0x2d,
+	0x30, 0x29, 0xaa, 0x83, 0xe8, 0x19, 0xf9, 0xd5, 0x64, 0xf2, 0x2d, 0xd8, 0x54, 0x6d, 0x58, 0xe7,
+	0xa8, 0x73, 0xe1, 0x47, 0x3f, 0x1b, 0xf5, 0xc6, 0x89, 0xf4, 0x90, 0x8c, 0x5a, 0xac, 0xd4, 0x19,
+	0xeb, 0x3a, 0x86, 0x34, 0xd2, 0x9d, 0xce, 0xe8, 0x31, 0x19, 0xb7, 0x80, 0x91, 0xaf, 0xc0, 0x7a,
+	0x48, 0xf4, 0xa2, 0xd6, 0xb4, 0x42, 0x89, 0xfe, 0x25, 0x03, 0x83, 0x97, 0x29, 0x0d, 0xfb, 0x81,
+	0xc3, 0x7e, 0xd4, 0x9c, 0xe8, 0x29, 0xc1, 0x65, 0xa6, 0xc0, 0x3b, 0x42, 0xed, 0xed, 0x3b, 0xef,
+	0xb8, 0x15, 0x9d, 0xf9, 0x80, 0x90, 0xd2, 0x80, 0xe6, 0xb1, 0x80, 0xdc, 0xb2, 0x81, 0xdb, 0xef,
+	0x57, 0xca, 0x55, 0x25, 0xd0, 0x7d, 0xe2, 0x6b, 0xd8, 0x2a, 0x0b, 0x5c, 0x16, 0x6c, 0xe8, 0xa6,
+	0x5e, 0x2d, 0x5c, 0x17, 0x94, 0x91, 0xa1, 0x86, 0x47, 0xd0, 0xa0, 0x99, 0xe7, 0x46, 0xed, 0xb1,
+	0xb2, 0x25, 0x71, 0x92, 0x02, 0x4f, 0xa5, 0x65, 0x3e, 0xce, 0xbc, 0xc8, 0x73, 0xc2, 0x42, 0x5a,
+	0x7a, 0x49, 0x26, 0xbb, 0x38, 0x93, 0x9b, 0xd8, 0xc2, 0x86, 0x3f, 0x4b, 0x9b, 0x72, 0xa5, 0x25,
+	0xfe, 0x67, 0x8e, 0x5b, 0x77, 0x98, 0x45, 0x1c, 0xff, 0xff, 0x83, 0xb9, 0x47, 0xe4, 0xd6, 0x11,
+	0x2b, 0x07, 0xcc, 0xd7, 0xe4, 0x5f, 0xa2, 0xb6, 0xc1, 0x37, 0x25, 0xcd, 0x7f, 0x7f, 0xea, 0x60,
+	0x59, 0x15, 0xb3, 0xec, 0x3c, 0xcc, 0x04, 0x06, 0x94, 0xeb, 0x00, 0x3d, 0x61, 0xed, 0xc1, 0x0a,
+	0x4d, 0xe8, 0x7a, 0x9b, 0x26, 0x99, 0xc4, 0xef, 0x9c, 0x0a, 0xf5, 0xe5, 0x51, 0xf0, 0xf5, 0xc0,
+	0x8d, 0x67, 0xef, 0x01, 0x00, 0x00, 0xff, 0xff, 0xb9, 0xe5, 0x79, 0xd4, 0x33, 0x02, 0x00, 0x00,
+}
diff --git a/go/src/github.com/googleapis/proto-client-go/logging/type_/log_severity.pb.go b/go/src/github.com/googleapis/proto-client-go/logging/type_/log_severity.pb.go
new file mode 100644
index 0000000..b5a7189
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/logging/type_/log_severity.pb.go
@@ -0,0 +1,101 @@
+// Code generated by protoc-gen-go.
+// source: google/logging/type/log_severity.proto
+// DO NOT EDIT!
+
+package type_
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import _ "github.com/googleapis/proto-client-go/api"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// The severity of the event described in a log entry.  These guideline severity
+// levels are ordered, with numerically smaller levels treated as less severe
+// than numerically larger levels. If the source of the log entries uses a
+// different set of severity levels, the client should select the closest
+// corresponding `LogSeverity` value. For example, Java's FINE, FINER, and
+// FINEST levels might all map to `LogSeverity.DEBUG`. If the original severity
+// code must be preserved, it can be stored in the payload.
+//
+type LogSeverity int32
+
+const (
+	// The log entry has no assigned severity level.
+	LogSeverity_DEFAULT LogSeverity = 0
+	// Debug or trace information.
+	LogSeverity_DEBUG LogSeverity = 100
+	// Routine information, such as ongoing status or performance.
+	LogSeverity_INFO LogSeverity = 200
+	// Normal but significant events, such as start up, shut down, or
+	// configuration.
+	LogSeverity_NOTICE LogSeverity = 300
+	// Warning events might cause problems.
+	LogSeverity_WARNING LogSeverity = 400
+	// Error events are likely to cause problems.
+	LogSeverity_ERROR LogSeverity = 500
+	// Critical events cause more severe problems or brief outages.
+	LogSeverity_CRITICAL LogSeverity = 600
+	// A person must take an action immediately.
+	LogSeverity_ALERT LogSeverity = 700
+	// One or more systems are unusable.
+	LogSeverity_EMERGENCY LogSeverity = 800
+)
+
+var LogSeverity_name = map[int32]string{
+	0:   "DEFAULT",
+	100: "DEBUG",
+	200: "INFO",
+	300: "NOTICE",
+	400: "WARNING",
+	500: "ERROR",
+	600: "CRITICAL",
+	700: "ALERT",
+	800: "EMERGENCY",
+}
+var LogSeverity_value = map[string]int32{
+	"DEFAULT":   0,
+	"DEBUG":     100,
+	"INFO":      200,
+	"NOTICE":    300,
+	"WARNING":   400,
+	"ERROR":     500,
+	"CRITICAL":  600,
+	"ALERT":     700,
+	"EMERGENCY": 800,
+}
+
+func (x LogSeverity) String() string {
+	return proto.EnumName(LogSeverity_name, int32(x))
+}
+func (LogSeverity) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
+
+func init() {
+	proto.RegisterEnum("google.logging.type.LogSeverity", LogSeverity_name, LogSeverity_value)
+}
+
+var fileDescriptor1 = []byte{
+	// 274 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x52, 0x4b, 0xcf, 0xcf, 0x4f,
+	0xcf, 0x49, 0xd5, 0xcf, 0xc9, 0x4f, 0x4f, 0xcf, 0xcc, 0x4b, 0xd7, 0x2f, 0xa9, 0x2c, 0x00, 0x73,
+	0xe2, 0x8b, 0x53, 0xcb, 0x52, 0x8b, 0x32, 0x4b, 0x2a, 0xf5, 0x0a, 0x8a, 0xf2, 0x4b, 0xf2, 0x85,
+	0x84, 0x21, 0xea, 0xf4, 0xa0, 0xea, 0xf4, 0x40, 0xea, 0xa4, 0x64, 0xa0, 0x9a, 0x13, 0x0b, 0x32,
+	0xf5, 0x13, 0xf3, 0xf2, 0xf2, 0x4b, 0x12, 0x4b, 0x32, 0xf3, 0xf3, 0x8a, 0x21, 0x5a, 0xb4, 0x9a,
+	0x18, 0xb9, 0xb8, 0x7d, 0xf2, 0xd3, 0x83, 0xa1, 0x06, 0x09, 0x71, 0x73, 0xb1, 0xbb, 0xb8, 0xba,
+	0x39, 0x86, 0xfa, 0x84, 0x08, 0x30, 0x08, 0x71, 0x72, 0xb1, 0xba, 0xb8, 0x3a, 0x85, 0xba, 0x0b,
+	0xa4, 0x00, 0x99, 0x2c, 0x9e, 0x7e, 0x6e, 0xfe, 0x02, 0x27, 0x18, 0x81, 0x4a, 0xd8, 0xfc, 0xfc,
+	0x43, 0x3c, 0x9d, 0x5d, 0x05, 0xd6, 0x30, 0x09, 0xf1, 0x70, 0xb1, 0x87, 0x3b, 0x06, 0xf9, 0x79,
+	0xfa, 0xb9, 0x0b, 0x4c, 0x60, 0x16, 0xe2, 0xe2, 0x62, 0x75, 0x0d, 0x0a, 0xf2, 0x0f, 0x12, 0xf8,
+	0xc2, 0x2c, 0xc4, 0xcb, 0xc5, 0xe1, 0x1c, 0xe4, 0x09, 0x54, 0xe7, 0xe8, 0x23, 0x70, 0x83, 0x05,
+	0x24, 0xe5, 0xe8, 0xe3, 0x1a, 0x14, 0x22, 0xb0, 0x87, 0x55, 0x88, 0x8f, 0x8b, 0xd3, 0xd5, 0xd7,
+	0x35, 0xc8, 0xdd, 0xd5, 0xcf, 0x39, 0x52, 0x60, 0x01, 0x9b, 0x53, 0x12, 0x97, 0x78, 0x72, 0x7e,
+	0xae, 0x1e, 0x16, 0xd7, 0x3b, 0x09, 0x20, 0x39, 0x2e, 0x00, 0xe4, 0xe2, 0x00, 0xc6, 0x28, 0xe3,
+	0xf4, 0xcc, 0x92, 0x8c, 0xd2, 0x24, 0x3d, 0xa0, 0x1e, 0x7d, 0x88, 0x1e, 0xa0, 0xdf, 0x8a, 0xf5,
+	0xc1, 0x1e, 0xd2, 0x4d, 0xce, 0xc9, 0x4c, 0xcd, 0x2b, 0xd1, 0x4d, 0xcf, 0x47, 0x09, 0xad, 0xf8,
+	0x24, 0x36, 0xb0, 0xb4, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x65, 0x4e, 0xff, 0x20, 0x4c, 0x01,
+	0x00, 0x00,
+}
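The generated `LogSeverity` above follows protoc-gen-go's standard int32-enum shape: a typed constant set plus parallel `name`/`value` maps, with `String()` delegating to `proto.EnumName`. A minimal stdlib-only sketch of that same lookup pattern (no protobuf dependency; the `Severity` names here are illustrative, not the real generated API):

```go
package main

import "fmt"

// Severity mirrors the shape of the generated LogSeverity enum:
// an int32 with parallel name/value maps.
type Severity int32

const (
	SeverityDefault Severity = 0
	SeverityInfo    Severity = 200
	SeverityError   Severity = 500
)

var severityName = map[int32]string{
	0:   "DEFAULT",
	200: "INFO",
	500: "ERROR",
}

var severityValue = map[string]int32{
	"DEFAULT": 0,
	"INFO":    200,
	"ERROR":   500,
}

// String resolves the numeric value to its proto name, falling back
// to the raw number for unknown values, as proto.EnumName does.
func (s Severity) String() string {
	if n, ok := severityName[int32(s)]; ok {
		return n
	}
	return fmt.Sprintf("%d", s)
}

func main() {
	fmt.Println(SeverityInfo)           // INFO
	fmt.Println(severityValue["ERROR"]) // 500
	fmt.Println(Severity(42))           // 42
}
```

The two maps are what make round-tripping between wire values and text names cheap in both directions.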
diff --git a/go/src/github.com/googleapis/proto-client-go/logging/v2/log_entry.pb.go b/go/src/github.com/googleapis/proto-client-go/logging/v2/log_entry.pb.go
new file mode 100644
index 0000000..12135f1
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/logging/v2/log_entry.pb.go
@@ -0,0 +1,349 @@
+// Code generated by protoc-gen-go.
+// source: google/logging/v2/log_entry.proto
+// DO NOT EDIT!
+
+/*
+Package v2 is a generated protocol buffer package.
+
+It is generated from these files:
+	google/logging/v2/log_entry.proto
+	google/logging/v2/logging_config.proto
+	google/logging/v2/logging_metrics.proto
+	google/logging/v2/logging.proto
+
+It has these top-level messages:
+	LogEntry
+	LogEntryOperation
+	LogSink
+	ListSinksRequest
+	ListSinksResponse
+	GetSinkRequest
+	CreateSinkRequest
+	UpdateSinkRequest
+	DeleteSinkRequest
+	LogMetric
+	ListLogMetricsRequest
+	ListLogMetricsResponse
+	GetLogMetricRequest
+	CreateLogMetricRequest
+	UpdateLogMetricRequest
+	DeleteLogMetricRequest
+	DeleteLogRequest
+	WriteLogEntriesRequest
+	WriteLogEntriesResponse
+	ListLogEntriesRequest
+	ListLogEntriesResponse
+	ListMonitoredResourceDescriptorsRequest
+	ListMonitoredResourceDescriptorsResponse
+*/
+package v2
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import google_api1 "github.com/googleapis/proto-client-go/api"
+import google_logging_type "github.com/googleapis/proto-client-go/logging/type_"
+import google_logging_type1 "github.com/googleapis/proto-client-go/logging/type_"
+import google_protobuf1 "github.com/golang/protobuf/ptypes/any"
+import google_protobuf2 "github.com/golang/protobuf/ptypes/struct"
+import google_protobuf3 "github.com/golang/protobuf/ptypes/timestamp"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
+// An individual entry in a log.
+type LogEntry struct {
+	// Required. The resource name of the log to which this log entry
+	// belongs. The format of the name is
+	// `projects/&lt;project-id&gt;/logs/&lt;log-id&gt;`.  Examples:
+	// `"projects/my-projectid/logs/syslog"`,
+	// `"projects/1234567890/logs/library.googleapis.com%2Fbook_log"`.
+	//
+	// The log ID part of resource name must be less than 512 characters
+	// long and can only include the following characters: upper and
+	// lower case alphanumeric characters: [A-Za-z0-9]; and punctuation
+	// characters: forward-slash, underscore, hyphen, and period.
+	// Forward-slash (`/`) characters in the log ID must be URL-encoded.
+	LogName string `protobuf:"bytes,12,opt,name=log_name,json=logName" json:"log_name,omitempty"`
+	// Required. The monitored resource associated with this log entry.
+	// Example: a log entry that reports a database error would be
+	// associated with the monitored resource designating the particular
+	// database that reported the error.
+	Resource *google_api1.MonitoredResource `protobuf:"bytes,8,opt,name=resource" json:"resource,omitempty"`
+	// Required. The log entry payload, which can be one of multiple types.
+	//
+	// Types that are valid to be assigned to Payload:
+	//	*LogEntry_ProtoPayload
+	//	*LogEntry_TextPayload
+	//	*LogEntry_JsonPayload
+	Payload isLogEntry_Payload `protobuf_oneof:"payload"`
+	// Optional. The time the event described by the log entry occurred.  If
+	// omitted, Cloud Logging will use the time the log entry is written.
+	Timestamp *google_protobuf3.Timestamp `protobuf:"bytes,9,opt,name=timestamp" json:"timestamp,omitempty"`
+	// Optional. The severity of the log entry. The default value is
+	// `LogSeverity.DEFAULT`.
+	Severity google_logging_type1.LogSeverity `protobuf:"varint,10,opt,name=severity,enum=google.logging.type.LogSeverity" json:"severity,omitempty"`
+	// Optional. A unique ID for the log entry. If you provide this field, the
+	// logging service considers other log entries in the same log with the same
+	// ID as duplicates which can be removed.
+	// If omitted, Cloud Logging will generate a unique ID for this log entry.
+	InsertId string `protobuf:"bytes,4,opt,name=insert_id,json=insertId" json:"insert_id,omitempty"`
+	// Optional. Information about the HTTP request associated with this log entry,
+	// if applicable.
+	HttpRequest *google_logging_type.HttpRequest `protobuf:"bytes,7,opt,name=http_request,json=httpRequest" json:"http_request,omitempty"`
+	// Optional. A set of user-defined (key, value) data that provides additional
+	// information about the log entry.
+	Labels map[string]string `protobuf:"bytes,11,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
+	// Optional. Information about an operation associated with the log entry, if
+	// applicable.
+	Operation *LogEntryOperation `protobuf:"bytes,15,opt,name=operation" json:"operation,omitempty"`
+}
+
+func (m *LogEntry) Reset()                    { *m = LogEntry{} }
+func (m *LogEntry) String() string            { return proto.CompactTextString(m) }
+func (*LogEntry) ProtoMessage()               {}
+func (*LogEntry) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+type isLogEntry_Payload interface {
+	isLogEntry_Payload()
+}
+
+type LogEntry_ProtoPayload struct {
+	ProtoPayload *google_protobuf1.Any `protobuf:"bytes,2,opt,name=proto_payload,json=protoPayload,oneof"`
+}
+type LogEntry_TextPayload struct {
+	TextPayload string `protobuf:"bytes,3,opt,name=text_payload,json=textPayload,oneof"`
+}
+type LogEntry_JsonPayload struct {
+	JsonPayload *google_protobuf2.Struct `protobuf:"bytes,6,opt,name=json_payload,json=jsonPayload,oneof"`
+}
+
+func (*LogEntry_ProtoPayload) isLogEntry_Payload() {}
+func (*LogEntry_TextPayload) isLogEntry_Payload()  {}
+func (*LogEntry_JsonPayload) isLogEntry_Payload()  {}
+
+func (m *LogEntry) GetPayload() isLogEntry_Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+func (m *LogEntry) GetResource() *google_api1.MonitoredResource {
+	if m != nil {
+		return m.Resource
+	}
+	return nil
+}
+
+func (m *LogEntry) GetProtoPayload() *google_protobuf1.Any {
+	if x, ok := m.GetPayload().(*LogEntry_ProtoPayload); ok {
+		return x.ProtoPayload
+	}
+	return nil
+}
+
+func (m *LogEntry) GetTextPayload() string {
+	if x, ok := m.GetPayload().(*LogEntry_TextPayload); ok {
+		return x.TextPayload
+	}
+	return ""
+}
+
+func (m *LogEntry) GetJsonPayload() *google_protobuf2.Struct {
+	if x, ok := m.GetPayload().(*LogEntry_JsonPayload); ok {
+		return x.JsonPayload
+	}
+	return nil
+}
+
+func (m *LogEntry) GetTimestamp() *google_protobuf3.Timestamp {
+	if m != nil {
+		return m.Timestamp
+	}
+	return nil
+}
+
+func (m *LogEntry) GetHttpRequest() *google_logging_type.HttpRequest {
+	if m != nil {
+		return m.HttpRequest
+	}
+	return nil
+}
+
+func (m *LogEntry) GetLabels() map[string]string {
+	if m != nil {
+		return m.Labels
+	}
+	return nil
+}
+
+func (m *LogEntry) GetOperation() *LogEntryOperation {
+	if m != nil {
+		return m.Operation
+	}
+	return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*LogEntry) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _LogEntry_OneofMarshaler, _LogEntry_OneofUnmarshaler, _LogEntry_OneofSizer, []interface{}{
+		(*LogEntry_ProtoPayload)(nil),
+		(*LogEntry_TextPayload)(nil),
+		(*LogEntry_JsonPayload)(nil),
+	}
+}
+
+func _LogEntry_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*LogEntry)
+	// payload
+	switch x := m.Payload.(type) {
+	case *LogEntry_ProtoPayload:
+		b.EncodeVarint(2<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.ProtoPayload); err != nil {
+			return err
+		}
+	case *LogEntry_TextPayload:
+		b.EncodeVarint(3<<3 | proto.WireBytes)
+		b.EncodeStringBytes(x.TextPayload)
+	case *LogEntry_JsonPayload:
+		b.EncodeVarint(6<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.JsonPayload); err != nil {
+			return err
+		}
+	case nil:
+	default:
+		return fmt.Errorf("LogEntry.Payload has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _LogEntry_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*LogEntry)
+	switch tag {
+	case 2: // payload.proto_payload
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(google_protobuf1.Any)
+		err := b.DecodeMessage(msg)
+		m.Payload = &LogEntry_ProtoPayload{msg}
+		return true, err
+	case 3: // payload.text_payload
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		x, err := b.DecodeStringBytes()
+		m.Payload = &LogEntry_TextPayload{x}
+		return true, err
+	case 6: // payload.json_payload
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(google_protobuf2.Struct)
+		err := b.DecodeMessage(msg)
+		m.Payload = &LogEntry_JsonPayload{msg}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _LogEntry_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*LogEntry)
+	// payload
+	switch x := m.Payload.(type) {
+	case *LogEntry_ProtoPayload:
+		s := proto.Size(x.ProtoPayload)
+		n += proto.SizeVarint(2<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *LogEntry_TextPayload:
+		n += proto.SizeVarint(3<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(len(x.TextPayload)))
+		n += len(x.TextPayload)
+	case *LogEntry_JsonPayload:
+		s := proto.Size(x.JsonPayload)
+		n += proto.SizeVarint(6<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
+// Additional information about a potentially long-running operation with which
+// a log entry is associated.
+type LogEntryOperation struct {
+	// Required. An arbitrary operation identifier. Log entries with the
+	// same identifier are assumed to be part of the same operation.
+	//
+	Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
+	// Required. An arbitrary producer identifier. The combination of
+	// `id` and `producer` must be globally unique.  Examples for `producer`:
+	// `"MyDivision.MyBigCompany.com"`, `"github.com/MyProject/MyApplication"`.
+	//
+	Producer string `protobuf:"bytes,2,opt,name=producer" json:"producer,omitempty"`
+	// Optional. Set this to True if this is the first log entry in the operation.
+	First bool `protobuf:"varint,3,opt,name=first" json:"first,omitempty"`
+	// Optional. Set this to True if this is the last log entry in the operation.
+	Last bool `protobuf:"varint,4,opt,name=last" json:"last,omitempty"`
+}
+
+func (m *LogEntryOperation) Reset()                    { *m = LogEntryOperation{} }
+func (m *LogEntryOperation) String() string            { return proto.CompactTextString(m) }
+func (*LogEntryOperation) ProtoMessage()               {}
+func (*LogEntryOperation) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+func init() {
+	proto.RegisterType((*LogEntry)(nil), "google.logging.v2.LogEntry")
+	proto.RegisterType((*LogEntryOperation)(nil), "google.logging.v2.LogEntryOperation")
+}
+
+var fileDescriptor0 = []byte{
+	// 569 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x7c, 0x94, 0x6f, 0x6f, 0xd3, 0x3e,
+	0x10, 0xc7, 0x7f, 0x5d, 0xbb, 0x36, 0xb9, 0x76, 0xfb, 0x31, 0x6b, 0x88, 0x2c, 0x80, 0x28, 0x1b,
+	0x02, 0x9e, 0xcc, 0x41, 0xe5, 0xc9, 0x06, 0x48, 0x88, 0x22, 0xa4, 0x21, 0x0d, 0x98, 0x3c, 0x1e,
+	0xed, 0x49, 0x95, 0xb6, 0x5e, 0x6a, 0x48, 0xe3, 0xe0, 0x38, 0x15, 0x7d, 0x41, 0xbc, 0x4f, 0xfc,
+	0x27, 0x4e, 0xab, 0x16, 0xed, 0x99, 0xcf, 0xf7, 0xf9, 0xde, 0x9d, 0xef, 0x2e, 0x81, 0xa7, 0x09,
+	0xe7, 0x49, 0x4a, 0xa3, 0x94, 0x27, 0x09, 0xcb, 0x92, 0x68, 0x31, 0xd0, 0xc7, 0x11, 0xcd, 0xa4,
+	0x58, 0xe2, 0x5c, 0x70, 0xc9, 0xd1, 0x81, 0x45, 0x70, 0x85, 0xe0, 0xc5, 0x20, 0x3c, 0xa9, 0x54,
+	0x71, 0xce, 0xa2, 0x39, 0xcf, 0x98, 0xe4, 0x82, 0x4e, 0x47, 0x82, 0x16, 0xbc, 0x14, 0x13, 0x6a,
+	0x75, 0xe1, 0xf3, 0x8d, 0xd0, 0x72, 0x99, 0xd3, 0x68, 0x26, 0x65, 0xae, 0xc0, 0x5f, 0x25, 0x2d,
+	0xe4, 0x5d, 0x9c, 0x2e, 0xa2, 0xa0, 0x0b, 0x2a, 0x98, 0xac, 0xea, 0x08, 0x8f, 0x2a, 0xce, 0x58,
+	0xe3, 0xf2, 0x36, 0x8a, 0x33, 0xe7, 0x7a, 0xb4, 0xe9, 0x2a, 0xa4, 0x28, 0x27, 0x2e, 0xc1, 0x93,
+	0x4d, 0xaf, 0x64, 0x73, 0x95, 0x3e, 0x9e, 0xe7, 0x16, 0x38, 0xfe, 0xb3, 0x0b, 0xde, 0x25, 0x4f,
+	0x3e, 0xe9, 0x47, 0xa3, 0x23, 0xf0, 0x74, 0xf2, 0x2c, 0x9e, 0xd3, 0xa0, 0xd7, 0x6f, 0xbc, 0xf4,
+	0x49, 0x47, 0xd9, 0x5f, 0x95, 0x89, 0xce, 0xc1, 0x73, 0x6f, 0x0c, 0x3c, 0xe5, 0xea, 0x0e, 0x1e,
+	0xe3, 0xaa, 0x39, 0xaa, 0x13, 0xf8, 0x8b, 0xeb, 0x04, 0xa9, 0x20, 0x52, 0xe3, 0xe8, 0x2d, 0xec,
+	0x99, 0x5c, 0xa3, 0x3c, 0x5e, 0xa6, 0x3c, 0x9e, 0x06, 0x3b, 0x46, 0x7f, 0xe8, 0xf4, 0xae, 0x36,
+	0xfc, 0x21, 0x5b, 0x5e, 0xfc, 0x47, 0x7a, 0xc6, 0xbe, 0xb2, 0x2c, 0x3a, 0x81, 0x9e, 0xa4, 0xbf,
+	0x65, 0xad, 0x6d, 0xea, 0xb2, 0x14, 0xd5, 0xd5, 0xb7, 0x0e, 0x7a, 0x07, 0xbd, 0x1f, 0x05, 0xcf,
+	0x6a, 0xa8, 0x6d, 0x12, 0x3c, 0xd8, 0x4a, 0x70, 0x6d, 0x5a, 0xa3, 0xd5, 0x1a, 0x77, 0xea, 0x33,
+	0xf0, 0xeb, 0xae, 0x04, 0xbe, 0x91, 0x86, 0x5b, 0xd2, 0xef, 0x8e, 0x20, 0x2b, 0x58, 0xe5, 0xf5,
+	0xdc, 0xa0, 0x02, 0x50, 0xc2, 0xfd, 0x41, 0x1f, 0x6f, 0x6c, 0x8c, 0x9e, 0x28, 0x56, 0x0d, 0xbe,
+	0xae, 0x38, 0x52, 0x2b, 0xd0, 0x43, 0xf0, 0x59, 0x56, 0x50, 0x21, 0x47, 0x6c, 0x1a, 0xb4, 0x4c,
+	0xbb, 0x3d, 0x7b, 0xf1, 0x79, 0x8a, 0x3e, 0x42, 0x6f, 0x7d, 0x5f, 0x82, 0x8e, 0xa9, 0xeb, 0xdf,
+	0xe1, 0x2f, 0x14, 0x48, 0x2c, 0x47, 0xba, 0xb3, 0x95, 0x81, 0xde, 0x43, 0x3b, 0x8d, 0xc7, 0x34,
+	0x2d, 0x82, 0x6e, 0xbf, 0xa9, 0xe4, 0x2f, 0xf0, 0xd6, 0x3e, 0x63, 0x37, 0x7c, 0x7c, 0x69, 0x48,
+	0x73, 0x26, 0x95, 0x0c, 0x0d, 0xc1, 0xe7, 0x39, 0x15, 0xb1, 0x64, 0x3c, 0x0b, 0xfe, 0x37, 0x25,
+	0x3c, 0xbb, 0x23, 0xc6, 0x37, 0xc7, 0x92, 0x95, 0x2c, 0x3c, 0x87, 0xee, 0x5a, 0x68, 0x74, 0x0f,
+	0x9a, 0x3f, 0xe9, 0x32, 0x68, 0x98, 0xf7, 0xea, 0x23, 0x3a, 0x84, 0xdd, 0x45, 0x9c, 0x96, 0xd4,
+	0xec, 0x85, 0x4f, 0xac, 0xf1, 0x66, 0xe7, 0xac, 0x31, 0xf4, 0xa1, 0x53, 0x8d, 0xf4, 0x98, 0xc1,
+	0xc1, 0x56, 0x16, 0xb4, 0x0f, 0x3b, 0xaa, 0x75, 0x36, 0x94, 0x3a, 0xa1, 0x10, 0x3c, 0x35, 0xb0,
+	0x69, 0x39, 0xa1, 0xa2, 0x0a, 0x56, 0xdb, 0x3a, 0xcb, 0x2d, 0x13, 0xaa, 0x93, 0x7a, 0x83, 0x3c,
+	0x62, 0x0d, 0x84, 0xa0, 0x95, 0xc6, 0xea, 0xb2, 0x65, 0x2e, 0xcd, 0x79, 0x78, 0x03, 0xf7, 0x27,
+	0x7c, 0xbe, 0xfd, 0xcc, 0xe1, 0x9e, 0xab, 0xe0, 0xca, 0x6c, 0x68, 0xe3, 0xe6, 0x55, 0xc2, 0xe4,
+	0xac, 0x1c, 0x63, 0x85, 0x47, 0x16, 0x57, 0xdf, 0x42, 0x61, 0x3f, 0xb6, 0xd3, 0x49, 0xca, 0xd4,
+	0xaf, 0xe4, 0x34, 0xe1, 0x6b, 0x3f, 0x98, 0x71, 0xdb, 0xf8, 0x5e, 0xff, 0x0d, 0x00, 0x00, 0xff,
+	0xff, 0xb9, 0xa1, 0xc3, 0xe5, 0x7c, 0x04, 0x00, 0x00,
+}
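The `LogEntry.Payload` field above uses protoc-gen-go's oneof idiom: an unexported marker interface (`isLogEntry_Payload`), one wrapper struct per case, checked-type-assertion accessors, and type-switch dispatch in the marshaler/sizer. A stdlib-only sketch of that idiom, with illustrative names rather than the real generated API:

```go
package main

import "fmt"

// isPayload mirrors the isLogEntry_Payload marker interface: each
// oneof case is a wrapper struct implementing a private marker method.
type isPayload interface{ isPayload() }

type TextPayload struct{ Text string }
type JSONPayload struct{ JSON map[string]interface{} }

func (*TextPayload) isPayload() {}
func (*JSONPayload) isPayload() {}

type Entry struct {
	Payload isPayload // at most one case is set
}

// GetText mirrors the generated per-case accessors: a checked type
// assertion that returns the zero value when another case is set.
func (e *Entry) GetText() string {
	if x, ok := e.Payload.(*TextPayload); ok {
		return x.Text
	}
	return ""
}

// describe dispatches on the concrete case, the way the generated
// oneof marshaler and sizer type-switch on m.Payload.
func describe(e *Entry) string {
	switch x := e.Payload.(type) {
	case *TextPayload:
		return "text:" + x.Text
	case *JSONPayload:
		return fmt.Sprintf("json:%d keys", len(x.JSON))
	case nil:
		return "unset"
	default:
		return "unknown"
	}
}

func main() {
	e := &Entry{Payload: &TextPayload{Text: "hello"}}
	fmt.Println(e.GetText())        // hello
	fmt.Println(describe(e))        // text:hello
	fmt.Println(describe(&Entry{})) // unset
}
```

The wrapper-struct-per-case design is what lets the compiler guarantee that only one member of the oneof is populated at a time.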
diff --git a/go/src/github.com/googleapis/proto-client-go/logging/v2/logging.pb.go b/go/src/github.com/googleapis/proto-client-go/logging/v2/logging.pb.go
new file mode 100644
index 0000000..901dbc3
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/logging/v2/logging.pb.go
@@ -0,0 +1,449 @@
+// Code generated by protoc-gen-go.
+// source: google/logging/v2/logging.proto
+// DO NOT EDIT!
+
+package v2
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import _ "github.com/googleapis/proto-client-go/api"
+import google_api1 "github.com/googleapis/proto-client-go/api"
+import google_protobuf4 "github.com/golang/protobuf/ptypes/empty"
+
+import (
+	context "golang.org/x/net/context"
+	grpc "google.golang.org/grpc"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// The parameters to DeleteLog.
+type DeleteLogRequest struct {
+	// Required. The resource name of the log to delete.  Example:
+	// `"projects/my-project/logs/syslog"`.
+	LogName string `protobuf:"bytes,1,opt,name=log_name,json=logName" json:"log_name,omitempty"`
+}
+
+func (m *DeleteLogRequest) Reset()                    { *m = DeleteLogRequest{} }
+func (m *DeleteLogRequest) String() string            { return proto.CompactTextString(m) }
+func (*DeleteLogRequest) ProtoMessage()               {}
+func (*DeleteLogRequest) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{0} }
+
+// The parameters to WriteLogEntries.
+type WriteLogEntriesRequest struct {
+	// Optional. A default log resource name for those log entries in `entries`
+	// that do not specify their own `logName`.  Example:
+	// `"projects/my-project/logs/syslog"`.  See
+	// [LogEntry][google.logging.v2.LogEntry].
+	LogName string `protobuf:"bytes,1,opt,name=log_name,json=logName" json:"log_name,omitempty"`
+	// Optional. A default monitored resource for those log entries in `entries`
+	// that do not specify their own `resource`.
+	Resource *google_api1.MonitoredResource `protobuf:"bytes,2,opt,name=resource" json:"resource,omitempty"`
+	// Optional. User-defined `key:value` items that are added to
+	// the `labels` field of each log entry in `entries`, except when a log
+	// entry specifies its own `key:value` item with the same key.
+	// Example: `{ "size": "large", "color":"red" }`
+	Labels map[string]string `protobuf:"bytes,3,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
+	// Required. The log entries to write. The log entries must have values for
+	// all required fields.
+	Entries []*LogEntry `protobuf:"bytes,4,rep,name=entries" json:"entries,omitempty"`
+}
+
+func (m *WriteLogEntriesRequest) Reset()                    { *m = WriteLogEntriesRequest{} }
+func (m *WriteLogEntriesRequest) String() string            { return proto.CompactTextString(m) }
+func (*WriteLogEntriesRequest) ProtoMessage()               {}
+func (*WriteLogEntriesRequest) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{1} }
+
+func (m *WriteLogEntriesRequest) GetResource() *google_api1.MonitoredResource {
+	if m != nil {
+		return m.Resource
+	}
+	return nil
+}
+
+func (m *WriteLogEntriesRequest) GetLabels() map[string]string {
+	if m != nil {
+		return m.Labels
+	}
+	return nil
+}
+
+func (m *WriteLogEntriesRequest) GetEntries() []*LogEntry {
+	if m != nil {
+		return m.Entries
+	}
+	return nil
+}
+
+// Result returned from WriteLogEntries.
+type WriteLogEntriesResponse struct {
+}
+
+func (m *WriteLogEntriesResponse) Reset()                    { *m = WriteLogEntriesResponse{} }
+func (m *WriteLogEntriesResponse) String() string            { return proto.CompactTextString(m) }
+func (*WriteLogEntriesResponse) ProtoMessage()               {}
+func (*WriteLogEntriesResponse) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{2} }
+
+// The parameters to `ListLogEntries`.
+type ListLogEntriesRequest struct {
+	// Required. One or more project IDs or project numbers from which to retrieve
+	// log entries.  Examples of a project ID: `"my-project-1A"`, `"1234567890"`.
+	ProjectIds []string `protobuf:"bytes,1,rep,name=project_ids,json=projectIds" json:"project_ids,omitempty"`
+	// Optional. An [advanced logs filter](/logging/docs/view/advanced_filters).
+	// The filter is compared against all log entries in the projects specified by
+	// `projectIds`.  Only entries that match the filter are retrieved.  An empty
+	// filter matches all log entries.
+	Filter string `protobuf:"bytes,2,opt,name=filter" json:"filter,omitempty"`
+	// Optional. How the results should be sorted.  Presently, the only permitted
+	// values are `"timestamp"` (default) and `"timestamp desc"`.  The first
+	// option returns entries in order of increasing values of
+	// `LogEntry.timestamp` (oldest first), and the second option returns entries
+	// in order of decreasing timestamps (newest first).  Entries with equal
+	// timestamps are returned in order of `LogEntry.insertId`.
+	OrderBy string `protobuf:"bytes,3,opt,name=order_by,json=orderBy" json:"order_by,omitempty"`
+	// Optional. The maximum number of results to return from this request.  Fewer
+	// results might be returned. You must check for the `nextPageToken` result to
+	// determine if additional results are available, which you can retrieve by
+	// passing the `nextPageToken` value in the `pageToken` parameter to the next
+	// request.
+	PageSize int32 `protobuf:"varint,4,opt,name=page_size,json=pageSize" json:"page_size,omitempty"`
+	// Optional. If the `pageToken` request parameter is supplied, then the next
+	// page of results in the set are retrieved.  The `pageToken` parameter must
+	// be set with the value of the `nextPageToken` result parameter from the
+	// previous request.  The values of `projectIds`, `filter`, and `orderBy` must
+	// be the same as in the previous request.
+	PageToken string `protobuf:"bytes,5,opt,name=page_token,json=pageToken" json:"page_token,omitempty"`
+}
+
+func (m *ListLogEntriesRequest) Reset()                    { *m = ListLogEntriesRequest{} }
+func (m *ListLogEntriesRequest) String() string            { return proto.CompactTextString(m) }
+func (*ListLogEntriesRequest) ProtoMessage()               {}
+func (*ListLogEntriesRequest) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{3} }
+
+// Result returned from `ListLogEntries`.
+type ListLogEntriesResponse struct {
+	// A list of log entries.
+	Entries []*LogEntry `protobuf:"bytes,1,rep,name=entries" json:"entries,omitempty"`
+	// If there are more results than were returned, then `nextPageToken` is
+	// given a value in the response.  To get the next batch of results, call
+	// this method again using the value of `nextPageToken` as `pageToken`.
+	NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken" json:"next_page_token,omitempty"`
+}
+
+func (m *ListLogEntriesResponse) Reset()                    { *m = ListLogEntriesResponse{} }
+func (m *ListLogEntriesResponse) String() string            { return proto.CompactTextString(m) }
+func (*ListLogEntriesResponse) ProtoMessage()               {}
+func (*ListLogEntriesResponse) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{4} }
+
+func (m *ListLogEntriesResponse) GetEntries() []*LogEntry {
+	if m != nil {
+		return m.Entries
+	}
+	return nil
+}
+
+// The parameters to ListMonitoredResourceDescriptors
+type ListMonitoredResourceDescriptorsRequest struct {
+	// Optional. The maximum number of results to return from this request.  Fewer
+	// results might be returned. You must check for the `nextPageToken` result to
+	// determine if additional results are available, which you can retrieve by
+	// passing the `nextPageToken` value in the `pageToken` parameter to the next
+	// request.
+	PageSize int32 `protobuf:"varint,1,opt,name=page_size,json=pageSize" json:"page_size,omitempty"`
+	// Optional. If the `pageToken` request parameter is supplied, then the next
+	// page of results in the set are retrieved.  The `pageToken` parameter must
+	// be set with the value of the `nextPageToken` result parameter from the
+	// previous request.
+	PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken" json:"page_token,omitempty"`
+}
+
+func (m *ListMonitoredResourceDescriptorsRequest) Reset() {
+	*m = ListMonitoredResourceDescriptorsRequest{}
+}
+func (m *ListMonitoredResourceDescriptorsRequest) String() string { return proto.CompactTextString(m) }
+func (*ListMonitoredResourceDescriptorsRequest) ProtoMessage()    {}
+func (*ListMonitoredResourceDescriptorsRequest) Descriptor() ([]byte, []int) {
+	return fileDescriptor3, []int{5}
+}
+
+// Result returned from ListMonitoredResourceDescriptors.
+type ListMonitoredResourceDescriptorsResponse struct {
+	// A list of resource descriptors.
+	ResourceDescriptors []*google_api1.MonitoredResourceDescriptor `protobuf:"bytes,1,rep,name=resource_descriptors,json=resourceDescriptors" json:"resource_descriptors,omitempty"`
+	// If there are more results than were returned, then `nextPageToken` is
+	// returned in the response.  To get the next batch of results, call this
+	// method again using the value of `nextPageToken` as `pageToken`.
+	NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken" json:"next_page_token,omitempty"`
+}
+
+func (m *ListMonitoredResourceDescriptorsResponse) Reset() {
+	*m = ListMonitoredResourceDescriptorsResponse{}
+}
+func (m *ListMonitoredResourceDescriptorsResponse) String() string { return proto.CompactTextString(m) }
+func (*ListMonitoredResourceDescriptorsResponse) ProtoMessage()    {}
+func (*ListMonitoredResourceDescriptorsResponse) Descriptor() ([]byte, []int) {
+	return fileDescriptor3, []int{6}
+}
+
+func (m *ListMonitoredResourceDescriptorsResponse) GetResourceDescriptors() []*google_api1.MonitoredResourceDescriptor {
+	if m != nil {
+		return m.ResourceDescriptors
+	}
+	return nil
+}
+
+func init() {
+	proto.RegisterType((*DeleteLogRequest)(nil), "google.logging.v2.DeleteLogRequest")
+	proto.RegisterType((*WriteLogEntriesRequest)(nil), "google.logging.v2.WriteLogEntriesRequest")
+	proto.RegisterType((*WriteLogEntriesResponse)(nil), "google.logging.v2.WriteLogEntriesResponse")
+	proto.RegisterType((*ListLogEntriesRequest)(nil), "google.logging.v2.ListLogEntriesRequest")
+	proto.RegisterType((*ListLogEntriesResponse)(nil), "google.logging.v2.ListLogEntriesResponse")
+	proto.RegisterType((*ListMonitoredResourceDescriptorsRequest)(nil), "google.logging.v2.ListMonitoredResourceDescriptorsRequest")
+	proto.RegisterType((*ListMonitoredResourceDescriptorsResponse)(nil), "google.logging.v2.ListMonitoredResourceDescriptorsResponse")
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
+// Client API for LoggingServiceV2 service
+
+type LoggingServiceV2Client interface {
+	// Deletes a log and all its log entries.
+	// The log will reappear if it receives new entries.
+	//
+	DeleteLog(ctx context.Context, in *DeleteLogRequest, opts ...grpc.CallOption) (*google_protobuf4.Empty, error)
+	// Writes log entries to Cloud Logging.
+	// All log entries in Cloud Logging are written by this method.
+	//
+	WriteLogEntries(ctx context.Context, in *WriteLogEntriesRequest, opts ...grpc.CallOption) (*WriteLogEntriesResponse, error)
+	// Lists log entries.  Use this method to retrieve log entries from Cloud
+	// Logging.  For ways to export log entries, see
+	// [Exporting Logs](/logging/docs/export).
+	//
+	ListLogEntries(ctx context.Context, in *ListLogEntriesRequest, opts ...grpc.CallOption) (*ListLogEntriesResponse, error)
+	// Lists monitored resource descriptors that are used by Cloud Logging.
+	ListMonitoredResourceDescriptors(ctx context.Context, in *ListMonitoredResourceDescriptorsRequest, opts ...grpc.CallOption) (*ListMonitoredResourceDescriptorsResponse, error)
+}
+
+type loggingServiceV2Client struct {
+	cc *grpc.ClientConn
+}
+
+func NewLoggingServiceV2Client(cc *grpc.ClientConn) LoggingServiceV2Client {
+	return &loggingServiceV2Client{cc}
+}
+
+func (c *loggingServiceV2Client) DeleteLog(ctx context.Context, in *DeleteLogRequest, opts ...grpc.CallOption) (*google_protobuf4.Empty, error) {
+	out := new(google_protobuf4.Empty)
+	err := grpc.Invoke(ctx, "/google.logging.v2.LoggingServiceV2/DeleteLog", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *loggingServiceV2Client) WriteLogEntries(ctx context.Context, in *WriteLogEntriesRequest, opts ...grpc.CallOption) (*WriteLogEntriesResponse, error) {
+	out := new(WriteLogEntriesResponse)
+	err := grpc.Invoke(ctx, "/google.logging.v2.LoggingServiceV2/WriteLogEntries", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *loggingServiceV2Client) ListLogEntries(ctx context.Context, in *ListLogEntriesRequest, opts ...grpc.CallOption) (*ListLogEntriesResponse, error) {
+	out := new(ListLogEntriesResponse)
+	err := grpc.Invoke(ctx, "/google.logging.v2.LoggingServiceV2/ListLogEntries", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *loggingServiceV2Client) ListMonitoredResourceDescriptors(ctx context.Context, in *ListMonitoredResourceDescriptorsRequest, opts ...grpc.CallOption) (*ListMonitoredResourceDescriptorsResponse, error) {
+	out := new(ListMonitoredResourceDescriptorsResponse)
+	err := grpc.Invoke(ctx, "/google.logging.v2.LoggingServiceV2/ListMonitoredResourceDescriptors", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// Server API for LoggingServiceV2 service
+
+type LoggingServiceV2Server interface {
+	// Deletes a log and all its log entries.
+	// The log will reappear if it receives new entries.
+	//
+	DeleteLog(context.Context, *DeleteLogRequest) (*google_protobuf4.Empty, error)
+	// Writes log entries to Cloud Logging.
+	// All log entries in Cloud Logging are written by this method.
+	//
+	WriteLogEntries(context.Context, *WriteLogEntriesRequest) (*WriteLogEntriesResponse, error)
+	// Lists log entries.  Use this method to retrieve log entries from Cloud
+	// Logging.  For ways to export log entries, see
+	// [Exporting Logs](/logging/docs/export).
+	//
+	ListLogEntries(context.Context, *ListLogEntriesRequest) (*ListLogEntriesResponse, error)
+	// Lists monitored resource descriptors that are used by Cloud Logging.
+	ListMonitoredResourceDescriptors(context.Context, *ListMonitoredResourceDescriptorsRequest) (*ListMonitoredResourceDescriptorsResponse, error)
+}
+
+func RegisterLoggingServiceV2Server(s *grpc.Server, srv LoggingServiceV2Server) {
+	s.RegisterService(&_LoggingServiceV2_serviceDesc, srv)
+}
+
+func _LoggingServiceV2_DeleteLog_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(DeleteLogRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(LoggingServiceV2Server).DeleteLog(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.LoggingServiceV2/DeleteLog",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(LoggingServiceV2Server).DeleteLog(ctx, req.(*DeleteLogRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _LoggingServiceV2_WriteLogEntries_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(WriteLogEntriesRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(LoggingServiceV2Server).WriteLogEntries(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.LoggingServiceV2/WriteLogEntries",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(LoggingServiceV2Server).WriteLogEntries(ctx, req.(*WriteLogEntriesRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _LoggingServiceV2_ListLogEntries_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(ListLogEntriesRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(LoggingServiceV2Server).ListLogEntries(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.LoggingServiceV2/ListLogEntries",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(LoggingServiceV2Server).ListLogEntries(ctx, req.(*ListLogEntriesRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _LoggingServiceV2_ListMonitoredResourceDescriptors_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(ListMonitoredResourceDescriptorsRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(LoggingServiceV2Server).ListMonitoredResourceDescriptors(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.LoggingServiceV2/ListMonitoredResourceDescriptors",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(LoggingServiceV2Server).ListMonitoredResourceDescriptors(ctx, req.(*ListMonitoredResourceDescriptorsRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+var _LoggingServiceV2_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "google.logging.v2.LoggingServiceV2",
+	HandlerType: (*LoggingServiceV2Server)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "DeleteLog",
+			Handler:    _LoggingServiceV2_DeleteLog_Handler,
+		},
+		{
+			MethodName: "WriteLogEntries",
+			Handler:    _LoggingServiceV2_WriteLogEntries_Handler,
+		},
+		{
+			MethodName: "ListLogEntries",
+			Handler:    _LoggingServiceV2_ListLogEntries_Handler,
+		},
+		{
+			MethodName: "ListMonitoredResourceDescriptors",
+			Handler:    _LoggingServiceV2_ListMonitoredResourceDescriptors_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{},
+}
+
+var fileDescriptor3 = []byte{
+	// 731 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x9c, 0x55, 0xcd, 0x6e, 0x12, 0x5f,
+	0x14, 0xcf, 0x85, 0x7e, 0x71, 0xfa, 0xff, 0xdb, 0x7a, 0x6d, 0x91, 0x0e, 0x36, 0xa5, 0xd3, 0x68,
+	0x29, 0x09, 0x83, 0x62, 0x9a, 0x58, 0x8c, 0x9b, 0xa6, 0x5d, 0x98, 0x50, 0xd3, 0x4c, 0x8d, 0x26,
+	0x8d, 0x09, 0x19, 0xe0, 0x74, 0xbc, 0x76, 0x98, 0x8b, 0x33, 0x17, 0x2a, 0x1a, 0x37, 0x6e, 0x5c,
+	0xb8, 0xf4, 0x21, 0xdc, 0xe9, 0x7b, 0xb8, 0xf5, 0x15, 0x7c, 0x00, 0x97, 0x2e, 0xbd, 0x33, 0x73,
+	0x87, 0x52, 0xc0, 0x96, 0xb8, 0x82, 0x73, 0xce, 0xef, 0x7c, 0xff, 0xce, 0x1d, 0x58, 0xb3, 0x39,
+	0xb7, 0x1d, 0x2c, 0x39, 0xdc, 0xb6, 0x99, 0x6b, 0x97, 0xba, 0xe5, 0xf8, 0xaf, 0xd1, 0xf6, 0xb8,
+	0xe0, 0xf4, 0x7a, 0x04, 0x30, 0x62, 0x6d, 0xb7, 0xac, 0xdd, 0x52, 0x3e, 0x56, 0x9b, 0x95, 0x2c,
+	0xd7, 0xe5, 0xc2, 0x12, 0x8c, 0xbb, 0x7e, 0xe4, 0xa0, 0x6d, 0x0c, 0x58, 0x5b, 0xdc, 0x65, 0x82,
+	0x7b, 0xd8, 0xac, 0x79, 0xe8, 0xf3, 0x8e, 0xd7, 0x40, 0x05, 0x5a, 0x1f, 0x9b, 0xb6, 0x86, 0xae,
+	0xf0, 0x7a, 0x0a, 0x92, 0x55, 0x90, 0x50, 0xaa, 0x77, 0x4e, 0x4a, 0xd8, 0x6a, 0x0b, 0x65, 0xd4,
+	0x8b, 0xb0, 0xb8, 0x87, 0x0e, 0x0a, 0xac, 0x72, 0xdb, 0xc4, 0xd7, 0x1d, 0xf4, 0x05, 0x5d, 0x81,
+	0xb9, 0x20, 0x86, 0x6b, 0xb5, 0x30, 0x43, 0x72, 0x24, 0x9f, 0x32, 0x67, 0xa5, 0xfc, 0x44, 0x8a,
+	0xfa, 0xb7, 0x04, 0xa4, 0x9f, 0x7b, 0x2c, 0x84, 0xef, 0xcb, 0x1c, 0x0c, 0xfd, 0xab, 0xbd, 0xe8,
+	0x0e, 0xcc, 0xc5, 0x65, 0x67, 0x12, 0xd2, 0x34, 0x5f, 0x5e, 0x35, 0xd4, 0x34, 0x64, 0x73, 0xc6,
+	0x41, 0xdc, 0x9c, 0xa9, 0x40, 0x66, 0x1f, 0x4e, 0x0f, 0x60, 0xc6, 0xb1, 0xea, 0xe8, 0xf8, 0x99,
+	0x64, 0x2e, 0x29, 0x1d, 0xb7, 0x8d, 0x91, 0x31, 0x1a, 0xe3, 0x0b, 0x32, 0xaa, 0xa1, 0x5f, 0xa0,
+	0xec, 0x99, 0x2a, 0x08, 0xdd, 0x86, 0x59, 0x8c, 0x50, 0x99, 0xa9, 0x30, 0x5e, 0x76, 0x4c, 0x3c,
+	0x15, 0xaa, 0x67, 0xc6, 0x58, 0x6d, 0x07, 0xe6, 0x07, 0xa2, 0xd1, 0x45, 0x48, 0x9e, 0x62, 0x4f,
+	0x75, 0x19, 0xfc, 0xa5, 0x4b, 0x30, 0xdd, 0xb5, 0x9c, 0x4e, 0xd4, 0x5e, 0xca, 0x8c, 0x84, 0x4a,
+	0xe2, 0x01, 0xd1, 0x57, 0xe0, 0xe6, 0x48, 0x7d, 0x7e, 0x5b, 0x6e, 0x19, 0xf5, 0x2f, 0x04, 0x96,
+	0xab, 0xcc, 0x17, 0xa3, 0xb3, 0x5c, 0x83, 0x79, 0xb9, 0x9e, 0x57, 0xd8, 0x10, 0x35, 0xd6, 0xf4,
+	0x65, 0xa2, 0xa4, 0x0c, 0x0a, 0x4a, 0xf5, 0xb8, 0xe9, 0xd3, 0x34, 0xcc, 0x9c, 0x30, 0x47, 0xa0,
+	0xa7, 0x12, 0x2a, 0x29, 0x58, 0x02, 0xf7, 0x9a, 0xe8, 0xd5, 0xea, 0x3d, 0x39, 0xb0, 0x70, 0x09,
+	0xa1, 0xbc, 0xdb, 0xa3, 0x59, 0x48, 0xb5, 0x2d, 0x1b, 0x6b, 0x3e, 0x7b, 0x8b, 0xb2, 0x79, 0x92,
+	0x9f, 0x36, 0xe7, 0x02, 0xc5, 0x91, 0x94, 0xe9, 0x2a, 0x40, 0x68, 0x14, 0xfc, 0x14, 0xdd, 0xcc,
+	0x74, 0xe8, 0x19, 0xc2, 0x9f, 0x06, 0x0a, 0xfd, 0x0c, 0xd2, 0xc3, 0x85, 0x46, 0x3d, 0x0c, 0x0e,
+	0x94, 0x4c, 0x3e, 0x50, 0x7a, 0x07, 0x16, 0x5c, 0x7c, 0x23, 0x6a, 0x03, 0x49, 0xa3, 0x46, 0xfe,
+	0x0f, 0xd4, 0x87, 0xfd, 0xc4, 0x08, 0x9b, 0x41, 0xe2, 0x11, 0x86, 0xec, 0xa1, 0xdf, 0xf0, 0x58,
+	0x5b, 0xea, 0xfa, 0x33, 0xbb, 0xd0, 0x1f, 0xb9, 0xb4, 0xbf, 0xc4, 0x70, 0x7f, 0x5f, 0x09, 0xe4,
+	0xaf, 0xce, 0xa3, 0x5a, 0x3e, 0x86, 0xa5, 0x98, 0x9e, 0xb5, 0xe6, 0xb9, 0x5d, 0xf5, 0xbf, 0x79,
+	0x29, 0xb3, 0xcf, 0xe3, 0x99, 0x37, 0xbc, 0xd1, 0x1c, 0x93, 0xce, 0xa5, 0xfc, 0x6b, 0x0a, 0x16,
+	0xab, 0xd1, 0x80, 0x8f, 0xd0, 0xeb, 0xb2, 0x06, 0x3e, 0x2b, 0xd3, 0x33, 0x48, 0xf5, 0x6f, 0x99,
+	0x6e, 0x8c, 0xd9, 0xc3, 0xf0, 0xa5, 0x6b, 0xe9, 0x18, 0x14, 0xbf, 0x0d, 0xc6, 0x7e, 0xf0, 0x36,
+	0xe8, 0xc5, 0x0f, 0x3f, 0x7e, 0x7e, 0x4e, 0x6c, 0x16, 0x6e, 0xcb, 0xf7, 0xa4, 0x8e, 0xc2, 0xba,
+	0x57, 0x7a, 0x17, 0xdf, 0xf6, 0x23, 0xc5, 0x42, 0xbf, 0x54, 0x08, 0x5e, 0x1a, 0xf9, 0xf3, 0x9e,
+	0x7e, 0x22, 0xb0, 0x30, 0x44, 0x72, 0xba, 0x35, 0xf1, 0xa1, 0x6a, 0x85, 0x49, 0xa0, 0xea, 0x66,
+	0xd6, 0xc3, 0xca, 0xb2, 0x7a, 0xba, 0x5f, 0x99, 0xa2, 0x54, 0xe5, 0x2c, 0xf0, 0xa8, 0x90, 0x02,
+	0xfd, 0x48, 0xe0, 0xda, 0x45, 0xb6, 0xd2, 0xfc, 0x38, 0x52, 0x8e, 0xbb, 0x3c, 0x6d, 0x6b, 0x02,
+	0xa4, 0x2a, 0x25, 0x17, 0x96, 0xa2, 0xe9, 0xcb, 0x23, 0xa5, 0x38, 0xd2, 0x21, 0xa8, 0xe4, 0x3b,
+	0x81, 0xdc, 0x55, 0xb4, 0xa2, 0x95, 0xbf, 0x64, 0x9c, 0x80, 0xf3, 0xda, 0xc3, 0x7f, 0xf2, 0x55,
+	0xf5, 0xab, 0x25, 0xd3, 0xf3, 0x25, 0xb7, 0x2e, 0x71, 0xdb, 0x7d, 0x01, 0xcb, 0x0d, 0xde, 0x1a,
+	0x4d, 0xb8, 0xfb, 0x9f, 0x22, 0xe2, 0x61, 0xc0, 0xa1, 0x43, 0x72, 0x7c, 0xd7, 0x66, 0xe2, 0x65,
+	0xa7, 0x6e, 0x48, 0x74, 0x29, 0x42, 0xcb, 0x53, 0xf0, 0xa3, 0xcf, 0x4f, 0xb1, 0xe1, 0x30, 0x39,
+	0xa5, 0xa2, 0xcd, 0x07, 0xbe, 0x58, 0xbf, 0x09, 0xa9, 0xcf, 0x84, 0xe6, 0xfb, 0x7f, 0x02, 0x00,
+	0x00, 0xff, 0xff, 0x4f, 0x70, 0x19, 0x9a, 0x47, 0x07, 0x00, 0x00,
+}
diff --git a/go/src/github.com/googleapis/proto-client-go/logging/v2/logging_config.pb.go b/go/src/github.com/googleapis/proto-client-go/logging/v2/logging_config.pb.go
new file mode 100644
index 0000000..a0653b3
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/logging/v2/logging_config.pb.go
@@ -0,0 +1,476 @@
+// Code generated by protoc-gen-go.
+// source: google/logging/v2/logging_config.proto
+// DO NOT EDIT!
+
+package v2
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import _ "github.com/googleapis/proto-client-go/api"
+import google_protobuf4 "github.com/golang/protobuf/ptypes/empty"
+import _ "github.com/golang/protobuf/ptypes/timestamp"
+
+import (
+	context "golang.org/x/net/context"
+	grpc "google.golang.org/grpc"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// Available log entry formats. Log entries can be written to Cloud
+// Logging in either format and can be exported in either format.
+// Version 2 is the preferred format.
+type LogSink_VersionFormat int32
+
+const (
+	// An unspecified version format will default to V2.
+	LogSink_VERSION_FORMAT_UNSPECIFIED LogSink_VersionFormat = 0
+	// `LogEntry` version 2 format.
+	LogSink_V2 LogSink_VersionFormat = 1
+	// `LogEntry` version 1 format.
+	LogSink_V1 LogSink_VersionFormat = 2
+)
+
+var LogSink_VersionFormat_name = map[int32]string{
+	0: "VERSION_FORMAT_UNSPECIFIED",
+	1: "V2",
+	2: "V1",
+}
+var LogSink_VersionFormat_value = map[string]int32{
+	"VERSION_FORMAT_UNSPECIFIED": 0,
+	"V2": 1,
+	"V1": 2,
+}
+
+func (x LogSink_VersionFormat) String() string {
+	return proto.EnumName(LogSink_VersionFormat_name, int32(x))
+}
+func (LogSink_VersionFormat) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{0, 0} }
+
+// Describes a sink used to export log entries outside Cloud Logging.
+type LogSink struct {
+	// Required. The client-assigned sink identifier. Example:
+	// `"my-severe-errors-to-pubsub"`.
+	// Sink identifiers are limited to 1000 characters
+	// and can include only the following characters: `A-Z`, `a-z`,
+	// `0-9`, and the special characters `_-.`.
+	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+	// The export destination. See
+	// [Exporting Logs With Sinks](/logging/docs/api/tasks/exporting-logs).
+	// Examples: `"storage.googleapis.com/a-bucket"`,
+	// `"bigquery.googleapis.com/projects/a-project-id/datasets/a-dataset"`.
+	Destination string `protobuf:"bytes,3,opt,name=destination" json:"destination,omitempty"`
+	// An [advanced logs filter](/logging/docs/view/advanced_filters)
+	// that defines the log entries to be exported.  The filter must be
+	// consistent with the log entry format designed by the
+	// `outputVersionFormat` parameter, regardless of the format of the
+	// log entry that was originally written to Cloud Logging.
+	// Example: `"logName:syslog AND severity>=ERROR"`.
+	Filter string `protobuf:"bytes,5,opt,name=filter" json:"filter,omitempty"`
+	// The log entry version used when exporting log entries from this
+	// sink.  This version does not have to correspond to the version of
+	// the log entry when it was written to Cloud Logging.
+	OutputVersionFormat LogSink_VersionFormat `protobuf:"varint,6,opt,name=output_version_format,json=outputVersionFormat,enum=google.logging.v2.LogSink_VersionFormat" json:"output_version_format,omitempty"`
+}
+
+func (m *LogSink) Reset()                    { *m = LogSink{} }
+func (m *LogSink) String() string            { return proto.CompactTextString(m) }
+func (*LogSink) ProtoMessage()               {}
+func (*LogSink) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
+
+// The parameters to `ListSinks`.
+type ListSinksRequest struct {
+	// Required. The resource name of the project containing the sinks.
+	// Example: `"projects/my-logging-project"`, `"projects/01234567890"`.
+	ProjectName string `protobuf:"bytes,1,opt,name=project_name,json=projectName" json:"project_name,omitempty"`
+	// Optional. If the `pageToken` request parameter is supplied, then the next
+	// page of results in the set are retrieved.  The `pageToken` parameter must
+	// be set with the value of the `nextPageToken` result parameter from the
+	// previous request. The value of `projectName` must be the same as in the
+	// previous request.
+	PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken" json:"page_token,omitempty"`
+	// Optional. The maximum number of results to return from this request.  Fewer
+	// results might be returned. You must check for the `nextPageToken` result to
+	// determine if additional results are available, which you can retrieve by
+	// passing the `nextPageToken` value in the `pageToken` parameter to the next
+	// request.
+	PageSize int32 `protobuf:"varint,3,opt,name=page_size,json=pageSize" json:"page_size,omitempty"`
+}
+
+func (m *ListSinksRequest) Reset()                    { *m = ListSinksRequest{} }
+func (m *ListSinksRequest) String() string            { return proto.CompactTextString(m) }
+func (*ListSinksRequest) ProtoMessage()               {}
+func (*ListSinksRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{1} }
+
+// Result returned from `ListSinks`.
+type ListSinksResponse struct {
+	// A list of sinks.
+	Sinks []*LogSink `protobuf:"bytes,1,rep,name=sinks" json:"sinks,omitempty"`
+	// If there are more results than were returned, then `nextPageToken` is
+	// given a value in the response.  To get the next batch of results, call this
+	// method again using the value of `nextPageToken` as `pageToken`.
+	NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken" json:"next_page_token,omitempty"`
+}
+
+func (m *ListSinksResponse) Reset()                    { *m = ListSinksResponse{} }
+func (m *ListSinksResponse) String() string            { return proto.CompactTextString(m) }
+func (*ListSinksResponse) ProtoMessage()               {}
+func (*ListSinksResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{2} }
+
+func (m *ListSinksResponse) GetSinks() []*LogSink {
+	if m != nil {
+		return m.Sinks
+	}
+	return nil
+}
+
+// The parameters to `GetSink`.
+type GetSinkRequest struct {
+	// The resource name of the sink to return.
+	// Example: `"projects/my-project-id/sinks/my-sink-id"`.
+	SinkName string `protobuf:"bytes,1,opt,name=sink_name,json=sinkName" json:"sink_name,omitempty"`
+}
+
+func (m *GetSinkRequest) Reset()                    { *m = GetSinkRequest{} }
+func (m *GetSinkRequest) String() string            { return proto.CompactTextString(m) }
+func (*GetSinkRequest) ProtoMessage()               {}
+func (*GetSinkRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{3} }
+
+// The parameters to `CreateSink`.
+type CreateSinkRequest struct {
+	// The resource name of the project in which to create the sink.
+	// Example: `"projects/my-project-id"`.
+	//
+	// The new sink must be provided in the request.
+	ProjectName string `protobuf:"bytes,1,opt,name=project_name,json=projectName" json:"project_name,omitempty"`
+	// The new sink, which must not have an identifier that already
+	// exists.
+	Sink *LogSink `protobuf:"bytes,2,opt,name=sink" json:"sink,omitempty"`
+}
+
+func (m *CreateSinkRequest) Reset()                    { *m = CreateSinkRequest{} }
+func (m *CreateSinkRequest) String() string            { return proto.CompactTextString(m) }
+func (*CreateSinkRequest) ProtoMessage()               {}
+func (*CreateSinkRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{4} }
+
+func (m *CreateSinkRequest) GetSink() *LogSink {
+	if m != nil {
+		return m.Sink
+	}
+	return nil
+}
+
+// The parameters to `UpdateSink`.
+type UpdateSinkRequest struct {
+	// The resource name of the sink to update.
+	// Example: `"projects/my-project-id/sinks/my-sink-id"`.
+	//
+	// The updated sink must be provided in the request and have the
+	// same name that is specified in `sinkName`.  If the sink does not
+	// exist, it is created.
+	SinkName string `protobuf:"bytes,1,opt,name=sink_name,json=sinkName" json:"sink_name,omitempty"`
+	// The updated sink, whose name must be the same as the sink
+	// identifier in `sinkName`.  If `sinkName` does not exist, then
+	// this method creates a new sink.
+	Sink *LogSink `protobuf:"bytes,2,opt,name=sink" json:"sink,omitempty"`
+}
+
+func (m *UpdateSinkRequest) Reset()                    { *m = UpdateSinkRequest{} }
+func (m *UpdateSinkRequest) String() string            { return proto.CompactTextString(m) }
+func (*UpdateSinkRequest) ProtoMessage()               {}
+func (*UpdateSinkRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{5} }
+
+func (m *UpdateSinkRequest) GetSink() *LogSink {
+	if m != nil {
+		return m.Sink
+	}
+	return nil
+}
+
+// The parameters to `DeleteSink`.
+type DeleteSinkRequest struct {
+	// The resource name of the sink to delete.
+	// Example: `"projects/my-project-id/sinks/my-sink-id"`.
+	SinkName string `protobuf:"bytes,1,opt,name=sink_name,json=sinkName" json:"sink_name,omitempty"`
+}
+
+func (m *DeleteSinkRequest) Reset()                    { *m = DeleteSinkRequest{} }
+func (m *DeleteSinkRequest) String() string            { return proto.CompactTextString(m) }
+func (*DeleteSinkRequest) ProtoMessage()               {}
+func (*DeleteSinkRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{6} }
+
+func init() {
+	proto.RegisterType((*LogSink)(nil), "google.logging.v2.LogSink")
+	proto.RegisterType((*ListSinksRequest)(nil), "google.logging.v2.ListSinksRequest")
+	proto.RegisterType((*ListSinksResponse)(nil), "google.logging.v2.ListSinksResponse")
+	proto.RegisterType((*GetSinkRequest)(nil), "google.logging.v2.GetSinkRequest")
+	proto.RegisterType((*CreateSinkRequest)(nil), "google.logging.v2.CreateSinkRequest")
+	proto.RegisterType((*UpdateSinkRequest)(nil), "google.logging.v2.UpdateSinkRequest")
+	proto.RegisterType((*DeleteSinkRequest)(nil), "google.logging.v2.DeleteSinkRequest")
+	proto.RegisterEnum("google.logging.v2.LogSink_VersionFormat", LogSink_VersionFormat_name, LogSink_VersionFormat_value)
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
+// Client API for ConfigServiceV2 service
+
+type ConfigServiceV2Client interface {
+	// Lists sinks.
+	ListSinks(ctx context.Context, in *ListSinksRequest, opts ...grpc.CallOption) (*ListSinksResponse, error)
+	// Gets a sink.
+	GetSink(ctx context.Context, in *GetSinkRequest, opts ...grpc.CallOption) (*LogSink, error)
+	// Creates a sink.
+	CreateSink(ctx context.Context, in *CreateSinkRequest, opts ...grpc.CallOption) (*LogSink, error)
+	// Creates or updates a sink.
+	UpdateSink(ctx context.Context, in *UpdateSinkRequest, opts ...grpc.CallOption) (*LogSink, error)
+	// Deletes a sink.
+	DeleteSink(ctx context.Context, in *DeleteSinkRequest, opts ...grpc.CallOption) (*google_protobuf4.Empty, error)
+}
+
+type configServiceV2Client struct {
+	cc *grpc.ClientConn
+}
+
+func NewConfigServiceV2Client(cc *grpc.ClientConn) ConfigServiceV2Client {
+	return &configServiceV2Client{cc}
+}
+
+func (c *configServiceV2Client) ListSinks(ctx context.Context, in *ListSinksRequest, opts ...grpc.CallOption) (*ListSinksResponse, error) {
+	out := new(ListSinksResponse)
+	err := grpc.Invoke(ctx, "/google.logging.v2.ConfigServiceV2/ListSinks", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *configServiceV2Client) GetSink(ctx context.Context, in *GetSinkRequest, opts ...grpc.CallOption) (*LogSink, error) {
+	out := new(LogSink)
+	err := grpc.Invoke(ctx, "/google.logging.v2.ConfigServiceV2/GetSink", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *configServiceV2Client) CreateSink(ctx context.Context, in *CreateSinkRequest, opts ...grpc.CallOption) (*LogSink, error) {
+	out := new(LogSink)
+	err := grpc.Invoke(ctx, "/google.logging.v2.ConfigServiceV2/CreateSink", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *configServiceV2Client) UpdateSink(ctx context.Context, in *UpdateSinkRequest, opts ...grpc.CallOption) (*LogSink, error) {
+	out := new(LogSink)
+	err := grpc.Invoke(ctx, "/google.logging.v2.ConfigServiceV2/UpdateSink", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *configServiceV2Client) DeleteSink(ctx context.Context, in *DeleteSinkRequest, opts ...grpc.CallOption) (*google_protobuf4.Empty, error) {
+	out := new(google_protobuf4.Empty)
+	err := grpc.Invoke(ctx, "/google.logging.v2.ConfigServiceV2/DeleteSink", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// Server API for ConfigServiceV2 service
+
+type ConfigServiceV2Server interface {
+	// Lists sinks.
+	ListSinks(context.Context, *ListSinksRequest) (*ListSinksResponse, error)
+	// Gets a sink.
+	GetSink(context.Context, *GetSinkRequest) (*LogSink, error)
+	// Creates a sink.
+	CreateSink(context.Context, *CreateSinkRequest) (*LogSink, error)
+	// Creates or updates a sink.
+	UpdateSink(context.Context, *UpdateSinkRequest) (*LogSink, error)
+	// Deletes a sink.
+	DeleteSink(context.Context, *DeleteSinkRequest) (*google_protobuf4.Empty, error)
+}
+
+func RegisterConfigServiceV2Server(s *grpc.Server, srv ConfigServiceV2Server) {
+	s.RegisterService(&_ConfigServiceV2_serviceDesc, srv)
+}
+
+func _ConfigServiceV2_ListSinks_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(ListSinksRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ConfigServiceV2Server).ListSinks(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.ConfigServiceV2/ListSinks",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ConfigServiceV2Server).ListSinks(ctx, req.(*ListSinksRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _ConfigServiceV2_GetSink_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(GetSinkRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ConfigServiceV2Server).GetSink(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.ConfigServiceV2/GetSink",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ConfigServiceV2Server).GetSink(ctx, req.(*GetSinkRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _ConfigServiceV2_CreateSink_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(CreateSinkRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ConfigServiceV2Server).CreateSink(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.ConfigServiceV2/CreateSink",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ConfigServiceV2Server).CreateSink(ctx, req.(*CreateSinkRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _ConfigServiceV2_UpdateSink_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(UpdateSinkRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ConfigServiceV2Server).UpdateSink(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.ConfigServiceV2/UpdateSink",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ConfigServiceV2Server).UpdateSink(ctx, req.(*UpdateSinkRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _ConfigServiceV2_DeleteSink_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(DeleteSinkRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ConfigServiceV2Server).DeleteSink(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.ConfigServiceV2/DeleteSink",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ConfigServiceV2Server).DeleteSink(ctx, req.(*DeleteSinkRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+var _ConfigServiceV2_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "google.logging.v2.ConfigServiceV2",
+	HandlerType: (*ConfigServiceV2Server)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "ListSinks",
+			Handler:    _ConfigServiceV2_ListSinks_Handler,
+		},
+		{
+			MethodName: "GetSink",
+			Handler:    _ConfigServiceV2_GetSink_Handler,
+		},
+		{
+			MethodName: "CreateSink",
+			Handler:    _ConfigServiceV2_CreateSink_Handler,
+		},
+		{
+			MethodName: "UpdateSink",
+			Handler:    _ConfigServiceV2_UpdateSink_Handler,
+		},
+		{
+			MethodName: "DeleteSink",
+			Handler:    _ConfigServiceV2_DeleteSink_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{},
+}
+
+var fileDescriptor1 = []byte{
+	// 690 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x55, 0xdf, 0x4e, 0x13, 0x4f,
+	0x14, 0xfe, 0x6d, 0xa1, 0x05, 0x0e, 0x3f, 0xa0, 0x1d, 0x03, 0x69, 0x16, 0xff, 0xc0, 0x6a, 0xb0,
+	0x92, 0xb0, 0x5b, 0xd6, 0xe8, 0x85, 0x89, 0x31, 0x02, 0xc5, 0x90, 0x20, 0x90, 0x2d, 0xf4, 0x82,
+	0x98, 0xac, 0xdb, 0x32, 0x5d, 0x47, 0xba, 0x3b, 0x4b, 0x77, 0xda, 0x88, 0xc4, 0x1b, 0x13, 0x8d,
+	0xf7, 0x3e, 0x83, 0x4f, 0xe4, 0x2b, 0xf8, 0x0c, 0x5e, 0x3b, 0x3b, 0xb3, 0xa5, 0x4b, 0x5b, 0x37,
+	0xf5, 0xaa, 0x3b, 0xe7, 0x7c, 0x33, 0xdf, 0x77, 0xbe, 0x73, 0x3a, 0x03, 0x6b, 0x2e, 0xa5, 0x6e,
+	0x0b, 0x1b, 0x2d, 0xea, 0xba, 0xc4, 0x77, 0x8d, 0xae, 0xd9, 0xfb, 0xb4, 0x1b, 0xd4, 0x6f, 0x12,
+	0x57, 0x0f, 0xda, 0x94, 0x51, 0x54, 0x90, 0x38, 0x3d, 0x4e, 0xea, 0x5d, 0x53, 0xbd, 0x1d, 0x6f,
+	0x75, 0x02, 0x62, 0x38, 0xbe, 0x4f, 0x99, 0xc3, 0x08, 0xf5, 0x43, 0xb9, 0x41, 0x5d, 0x8e, 0xb3,
+	0x62, 0x55, 0xef, 0x34, 0x0d, 0xec, 0x05, 0xec, 0x32, 0x4e, 0xde, 0x1b, 0x4c, 0x32, 0xe2, 0xe1,
+	0x90, 0x39, 0x5e, 0x20, 0x01, 0xda, 0x6f, 0x05, 0xa6, 0xf6, 0xa9, 0x5b, 0x25, 0xfe, 0x39, 0x42,
+	0x30, 0xe9, 0x3b, 0x1e, 0x2e, 0x2a, 0x2b, 0x4a, 0x69, 0xc6, 0x12, 0xdf, 0x68, 0x05, 0x66, 0xcf,
+	0xf8, 0x06, 0xe2, 0x0b, 0xce, 0xe2, 0x84, 0x48, 0x25, 0x43, 0x68, 0x09, 0x72, 0x4d, 0xd2, 0x62,
+	0xb8, 0x5d, 0xcc, 0x8a, 0x64, 0xbc, 0x42, 0x6f, 0x60, 0x91, 0x76, 0x58, 0xd0, 0x61, 0x76, 0x17,
+	0xb7, 0x43, 0x8e, 0xb4, 0x9b, 0xb4, 0xed, 0x39, 0xac, 0x98, 0xe3, 0xb0, 0x79, 0xb3, 0xa4, 0x0f,
+	0x15, 0xaa, 0xc7, 0x42, 0xf4, 0x9a, 0xdc, 0xb0, 0x2b, 0xf0, 0xd6, 0x2d, 0x79, 0xcc, 0x8d, 0xa0,
+	0xf6, 0x02, 0xe6, 0x6e, 0x04, 0xd0, 0x5d, 0x50, 0x6b, 0x15, 0xab, 0xba, 0x77, 0x78, 0x60, 0xef,
+	0x1e, 0x5a, 0xaf, 0x5f, 0x1e, 0xdb, 0x27, 0x07, 0xd5, 0xa3, 0xca, 0xf6, 0xde, 0xee, 0x5e, 0x65,
+	0x27, 0xff, 0x1f, 0xca, 0x41, 0xa6, 0x66, 0xe6, 0x15, 0xf1, 0xbb, 0x99, 0xcf, 0x68, 0x17, 0x90,
+	0xdf, 0x27, 0x21, 0x8b, 0xf8, 0x42, 0x0b, 0x5f, 0x74, 0x78, 0x45, 0x68, 0x15, 0xfe, 0xe7, 0xae,
+	0xbc, 0xc7, 0x0d, 0x66, 0x27, 0x8c, 0x98, 0x8d, 0x63, 0x07, 0x91, 0x1f, 0x77, 0x00, 0x02, 0xc7,
+	0xc5, 0x36, 0xa3, 0xe7, 0xd8, 0x2f, 0x66, 0x04, 0x60, 0x26, 0x8a, 0x1c, 0x47, 0x01, 0xb4, 0x0c,
+	0x62, 0x61, 0x87, 0xe4, 0x23, 0x16, 0x66, 0x65, 0xad, 0xe9, 0x28, 0x50, 0xe5, 0x6b, 0xcd, 0x83,
+	0x42, 0x82, 0x32, 0x0c, 0x78, 0x0f, 0x31, 0x2a, 0x43, 0x36, 0x8c, 0x02, 0x9c, 0x6c, 0xa2, 0x34,
+	0x6b, 0xaa, 0x7f, 0xb7, 0xc5, 0x92, 0x40, 0xb4, 0x06, 0x0b, 0x3e, 0xfe, 0xc0, 0xec, 0x21, 0x1d,
+	0x73, 0x51, 0xf8, 0xa8, 0xa7, 0x45, 0xdb, 0x80, 0xf9, 0x57, 0x58, 0xb0, 0xf5, 0xea, 0xe3, 0xea,
+	0xa2, 0x23, 0x92, 0xc5, 0x4d, 0x47, 0x81, 0xa8, 0x32, 0xad, 0x09, 0x85, 0xed, 0x36, 0x76, 0x18,
+	0x4e, 0xee, 0x18, 0xc3, 0x11, 0x1d, 0x26, 0xa3, 0x33, 0x84, 0x86, 0x74, 0xfd, 0x02, 0xa7, 0xbd,
+	0x85, 0xc2, 0x49, 0x70, 0x36, 0xc0, 0x93, 0xa6, 0xec, 0x9f, 0x19, 0xca, 0x50, 0xd8, 0xc1, 0x2d,
+	0x3c, 0x3e, 0x83, 0xf9, 0x23, 0x0b, 0x0b, 0xdb, 0xe2, 0x5f, 0x58, 0xc5, 0xed, 0x2e, 0x69, 0xe0,
+	0x9a, 0x89, 0xbe, 0x29, 0x30, 0x73, 0xdd, 0x2e, 0x74, 0x7f, 0x14, 0xeb, 0xc0, 0xfc, 0xa8, 0x0f,
+	0xd2, 0x41, 0xb2, 0xe3, 0x5a, 0xf9, 0xf3, 0xcf, 0x5f, 0xdf, 0x33, 0xeb, 0xa8, 0xc4, 0xef, 0x80,
+	0x3a, 0x66, 0xce, 0xa6, 0x71, 0x95, 0xf4, 0xf8, 0x79, 0xbc, 0x08, 0x8d, 0xf5, 0x4f, 0x86, 0xec,
+	0xf8, 0x25, 0x4c, 0xc5, 0x9d, 0x44, 0xab, 0x23, 0x28, 0x6e, 0x76, 0x59, 0x4d, 0x31, 0x48, 0x33,
+	0x04, 0xf7, 0x23, 0xf4, 0xb0, 0xcf, 0x7d, 0xed, 0x4a, 0x82, 0x58, 0xf2, 0x72, 0x01, 0xe8, 0xab,
+	0x02, 0xd0, 0x1f, 0x0b, 0x34, 0xaa, 0xc2, 0xa1, 0xa9, 0x49, 0x55, 0xf0, 0x54, 0x28, 0x28, 0x6b,
+	0x63, 0x57, 0xff, 0x4c, 0x34, 0x15, 0x7d, 0xe1, 0x42, 0xfa, 0x73, 0x33, 0x52, 0xc8, 0xd0, 0x58,
+	0xa5, 0x0a, 0x79, 0x22, 0x84, 0x18, 0xea, 0xb8, 0x56, 0xc4, 0x3a, 0xae, 0x00, 0xfa, 0xc3, 0x35,
+	0x52, 0xc6, 0xd0, 0xec, 0xa9, 0x4b, 0x3d, 0x54, 0xef, 0x1a, 0xd6, 0x2b, 0xd1, 0x1d, 0xdd, 0xeb,
+	0xc6, 0xfa, 0xb8, 0x12, 0xb6, 0x4e, 0x61, 0xb1, 0x41, 0xbd, 0x61, 0xce, 0xad, 0xb9, 0x7d, 0xf9,
+	0x2d, 0x87, 0xf8, 0x48, 0x39, 0x2d, 0xbb, 0x84, 0xbd, 0xeb, 0xd4, 0x75, 0x0e, 0x37, 0x24, 0x9c,
+	0xbf, 0x1e, 0xa1, 0x7c, 0x07, 0x36, 0x1a, 0x2d, 0x82, 0x7d, 0xb6, 0xe1, 0xd2, 0xc4, 0x6b, 0x54,
+	0xcf, 0x89, 0xdc, 0xe3, 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x3f, 0x6e, 0x9c, 0x64, 0xa9, 0x06,
+	0x00, 0x00,
+}
diff --git a/go/src/github.com/googleapis/proto-client-go/logging/v2/logging_metrics.pb.go b/go/src/github.com/googleapis/proto-client-go/logging/v2/logging_metrics.pb.go
new file mode 100644
index 0000000..55905de
--- /dev/null
+++ b/go/src/github.com/googleapis/proto-client-go/logging/v2/logging_metrics.pb.go
@@ -0,0 +1,429 @@
+// Code generated by protoc-gen-go.
+// source: google/logging/v2/logging_metrics.proto
+// DO NOT EDIT!
+
+package v2
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import _ "github.com/googleapis/proto-client-go/api"
+import google_protobuf4 "github.com/golang/protobuf/ptypes/empty"
+
+import (
+	context "golang.org/x/net/context"
+	grpc "google.golang.org/grpc"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// Describes a logs-based metric.  The value of the metric is the
+// number of log entries that match a logs filter.
+type LogMetric struct {
+	// Required. The client-assigned metric identifier. Example:
+	// `"severe_errors"`.  Metric identifiers are limited to 1000
+	// characters and can include only the following characters: `A-Z`,
+	// `a-z`, `0-9`, and the special characters `_-.,+!*',()%/\`.  The
+	// forward-slash character (`/`) denotes a hierarchy of name pieces,
+	// and it cannot be the first character of the name.
+	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+	// A description of this metric, which is used in documentation.
+	Description string `protobuf:"bytes,2,opt,name=description" json:"description,omitempty"`
+	// An [advanced logs filter](/logging/docs/view/advanced_filters).
+	// Example: `"logName:syslog AND severity>=ERROR"`.
+	Filter string `protobuf:"bytes,3,opt,name=filter" json:"filter,omitempty"`
+}
+
+func (m *LogMetric) Reset()                    { *m = LogMetric{} }
+func (m *LogMetric) String() string            { return proto.CompactTextString(m) }
+func (*LogMetric) ProtoMessage()               {}
+func (*LogMetric) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{0} }
+
+// The parameters to ListLogMetrics.
+type ListLogMetricsRequest struct {
+	// Required. The resource name of the project containing the metrics.
+	// Example: `"projects/my-project-id"`.
+	ProjectName string `protobuf:"bytes,1,opt,name=project_name,json=projectName" json:"project_name,omitempty"`
+	// Optional. If the `pageToken` request parameter is supplied, then the next
+	// page of results in the set are retrieved.  The `pageToken` parameter must
+	// be set with the value of the `nextPageToken` result parameter from the
+	// previous request.  The value of `projectName` must
+	// be the same as in the previous request.
+	PageToken string `protobuf:"bytes,2,opt,name=page_token,json=pageToken" json:"page_token,omitempty"`
+	// Optional. The maximum number of results to return from this request.  Fewer
+	// results might be returned. You must check for the `nextPageToken` result to
+	// determine if additional results are available, which you can retrieve by
+	// passing the `nextPageToken` value in the `pageToken` parameter to the next
+	// request.
+	PageSize int32 `protobuf:"varint,3,opt,name=page_size,json=pageSize" json:"page_size,omitempty"`
+}
+
+func (m *ListLogMetricsRequest) Reset()                    { *m = ListLogMetricsRequest{} }
+func (m *ListLogMetricsRequest) String() string            { return proto.CompactTextString(m) }
+func (*ListLogMetricsRequest) ProtoMessage()               {}
+func (*ListLogMetricsRequest) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{1} }
+
+// Result returned from ListLogMetrics.
+type ListLogMetricsResponse struct {
+	// A list of logs-based metrics.
+	Metrics []*LogMetric `protobuf:"bytes,1,rep,name=metrics" json:"metrics,omitempty"`
+	// If there are more results than were returned, then `nextPageToken` is given
+	// a value in the response.  To get the next batch of results, call this
+	// method again using the value of `nextPageToken` as `pageToken`.
+	NextPageToken string `protobuf:"bytes,2,opt,name=next_page_token,json=nextPageToken" json:"next_page_token,omitempty"`
+}
+
+func (m *ListLogMetricsResponse) Reset()                    { *m = ListLogMetricsResponse{} }
+func (m *ListLogMetricsResponse) String() string            { return proto.CompactTextString(m) }
+func (*ListLogMetricsResponse) ProtoMessage()               {}
+func (*ListLogMetricsResponse) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{2} }
+
+func (m *ListLogMetricsResponse) GetMetrics() []*LogMetric {
+	if m != nil {
+		return m.Metrics
+	}
+	return nil
+}
+
+// The parameters to GetLogMetric.
+type GetLogMetricRequest struct {
+	// The resource name of the desired metric.
+	// Example: `"projects/my-project-id/metrics/my-metric-id"`.
+	MetricName string `protobuf:"bytes,1,opt,name=metric_name,json=metricName" json:"metric_name,omitempty"`
+}
+
+func (m *GetLogMetricRequest) Reset()                    { *m = GetLogMetricRequest{} }
+func (m *GetLogMetricRequest) String() string            { return proto.CompactTextString(m) }
+func (*GetLogMetricRequest) ProtoMessage()               {}
+func (*GetLogMetricRequest) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{3} }
+
+// The parameters to CreateLogMetric.
+type CreateLogMetricRequest struct {
+	// The resource name of the project in which to create the metric.
+	// Example: `"projects/my-project-id"`.
+	//
+	// The new metric must be provided in the request.
+	ProjectName string `protobuf:"bytes,1,opt,name=project_name,json=projectName" json:"project_name,omitempty"`
+	// The new logs-based metric, which must not have an identifier that
+	// already exists.
+	Metric *LogMetric `protobuf:"bytes,2,opt,name=metric" json:"metric,omitempty"`
+}
+
+func (m *CreateLogMetricRequest) Reset()                    { *m = CreateLogMetricRequest{} }
+func (m *CreateLogMetricRequest) String() string            { return proto.CompactTextString(m) }
+func (*CreateLogMetricRequest) ProtoMessage()               {}
+func (*CreateLogMetricRequest) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{4} }
+
+func (m *CreateLogMetricRequest) GetMetric() *LogMetric {
+	if m != nil {
+		return m.Metric
+	}
+	return nil
+}
+
+// The parameters to UpdateLogMetric.
+//
+type UpdateLogMetricRequest struct {
+	// The resource name of the metric to update.
+	// Example: `"projects/my-project-id/metrics/my-metric-id"`.
+	//
+	// The updated metric must be provided in the request and have the
+	// same identifier that is specified in `metricName`.
+	// If the metric does not exist, it is created.
+	MetricName string `protobuf:"bytes,1,opt,name=metric_name,json=metricName" json:"metric_name,omitempty"`
+	// The updated metric, whose name must be the same as the
+	// metric identifier in `metricName`. If `metricName` does not
+	// exist, then a new metric is created.
+	Metric *LogMetric `protobuf:"bytes,2,opt,name=metric" json:"metric,omitempty"`
+}
+
+func (m *UpdateLogMetricRequest) Reset()                    { *m = UpdateLogMetricRequest{} }
+func (m *UpdateLogMetricRequest) String() string            { return proto.CompactTextString(m) }
+func (*UpdateLogMetricRequest) ProtoMessage()               {}
+func (*UpdateLogMetricRequest) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{5} }
+
+func (m *UpdateLogMetricRequest) GetMetric() *LogMetric {
+	if m != nil {
+		return m.Metric
+	}
+	return nil
+}
+
+// The parameters to DeleteLogMetric.
+type DeleteLogMetricRequest struct {
+	// The resource name of the metric to delete.
+	// Example: `"projects/my-project-id/metrics/my-metric-id"`.
+	MetricName string `protobuf:"bytes,1,opt,name=metric_name,json=metricName" json:"metric_name,omitempty"`
+}
+
+func (m *DeleteLogMetricRequest) Reset()                    { *m = DeleteLogMetricRequest{} }
+func (m *DeleteLogMetricRequest) String() string            { return proto.CompactTextString(m) }
+func (*DeleteLogMetricRequest) ProtoMessage()               {}
+func (*DeleteLogMetricRequest) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{6} }
+
+func init() {
+	proto.RegisterType((*LogMetric)(nil), "google.logging.v2.LogMetric")
+	proto.RegisterType((*ListLogMetricsRequest)(nil), "google.logging.v2.ListLogMetricsRequest")
+	proto.RegisterType((*ListLogMetricsResponse)(nil), "google.logging.v2.ListLogMetricsResponse")
+	proto.RegisterType((*GetLogMetricRequest)(nil), "google.logging.v2.GetLogMetricRequest")
+	proto.RegisterType((*CreateLogMetricRequest)(nil), "google.logging.v2.CreateLogMetricRequest")
+	proto.RegisterType((*UpdateLogMetricRequest)(nil), "google.logging.v2.UpdateLogMetricRequest")
+	proto.RegisterType((*DeleteLogMetricRequest)(nil), "google.logging.v2.DeleteLogMetricRequest")
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
+// Client API for MetricsServiceV2 service
+
+type MetricsServiceV2Client interface {
+	// Lists logs-based metrics.
+	ListLogMetrics(ctx context.Context, in *ListLogMetricsRequest, opts ...grpc.CallOption) (*ListLogMetricsResponse, error)
+	// Gets a logs-based metric.
+	GetLogMetric(ctx context.Context, in *GetLogMetricRequest, opts ...grpc.CallOption) (*LogMetric, error)
+	// Creates a logs-based metric.
+	CreateLogMetric(ctx context.Context, in *CreateLogMetricRequest, opts ...grpc.CallOption) (*LogMetric, error)
+	// Creates or updates a logs-based metric.
+	UpdateLogMetric(ctx context.Context, in *UpdateLogMetricRequest, opts ...grpc.CallOption) (*LogMetric, error)
+	// Deletes a logs-based metric.
+	DeleteLogMetric(ctx context.Context, in *DeleteLogMetricRequest, opts ...grpc.CallOption) (*google_protobuf4.Empty, error)
+}
+
+type metricsServiceV2Client struct {
+	cc *grpc.ClientConn
+}
+
+func NewMetricsServiceV2Client(cc *grpc.ClientConn) MetricsServiceV2Client {
+	return &metricsServiceV2Client{cc}
+}
+
+func (c *metricsServiceV2Client) ListLogMetrics(ctx context.Context, in *ListLogMetricsRequest, opts ...grpc.CallOption) (*ListLogMetricsResponse, error) {
+	out := new(ListLogMetricsResponse)
+	err := grpc.Invoke(ctx, "/google.logging.v2.MetricsServiceV2/ListLogMetrics", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *metricsServiceV2Client) GetLogMetric(ctx context.Context, in *GetLogMetricRequest, opts ...grpc.CallOption) (*LogMetric, error) {
+	out := new(LogMetric)
+	err := grpc.Invoke(ctx, "/google.logging.v2.MetricsServiceV2/GetLogMetric", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *metricsServiceV2Client) CreateLogMetric(ctx context.Context, in *CreateLogMetricRequest, opts ...grpc.CallOption) (*LogMetric, error) {
+	out := new(LogMetric)
+	err := grpc.Invoke(ctx, "/google.logging.v2.MetricsServiceV2/CreateLogMetric", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *metricsServiceV2Client) UpdateLogMetric(ctx context.Context, in *UpdateLogMetricRequest, opts ...grpc.CallOption) (*LogMetric, error) {
+	out := new(LogMetric)
+	err := grpc.Invoke(ctx, "/google.logging.v2.MetricsServiceV2/UpdateLogMetric", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *metricsServiceV2Client) DeleteLogMetric(ctx context.Context, in *DeleteLogMetricRequest, opts ...grpc.CallOption) (*google_protobuf4.Empty, error) {
+	out := new(google_protobuf4.Empty)
+	err := grpc.Invoke(ctx, "/google.logging.v2.MetricsServiceV2/DeleteLogMetric", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// Server API for MetricsServiceV2 service
+
+type MetricsServiceV2Server interface {
+	// Lists logs-based metrics.
+	ListLogMetrics(context.Context, *ListLogMetricsRequest) (*ListLogMetricsResponse, error)
+	// Gets a logs-based metric.
+	GetLogMetric(context.Context, *GetLogMetricRequest) (*LogMetric, error)
+	// Creates a logs-based metric.
+	CreateLogMetric(context.Context, *CreateLogMetricRequest) (*LogMetric, error)
+	// Creates or updates a logs-based metric.
+	UpdateLogMetric(context.Context, *UpdateLogMetricRequest) (*LogMetric, error)
+	// Deletes a logs-based metric.
+	DeleteLogMetric(context.Context, *DeleteLogMetricRequest) (*google_protobuf4.Empty, error)
+}
+
+func RegisterMetricsServiceV2Server(s *grpc.Server, srv MetricsServiceV2Server) {
+	s.RegisterService(&_MetricsServiceV2_serviceDesc, srv)
+}
+
+func _MetricsServiceV2_ListLogMetrics_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(ListLogMetricsRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MetricsServiceV2Server).ListLogMetrics(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.MetricsServiceV2/ListLogMetrics",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MetricsServiceV2Server).ListLogMetrics(ctx, req.(*ListLogMetricsRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _MetricsServiceV2_GetLogMetric_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(GetLogMetricRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MetricsServiceV2Server).GetLogMetric(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.MetricsServiceV2/GetLogMetric",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MetricsServiceV2Server).GetLogMetric(ctx, req.(*GetLogMetricRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _MetricsServiceV2_CreateLogMetric_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(CreateLogMetricRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MetricsServiceV2Server).CreateLogMetric(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.MetricsServiceV2/CreateLogMetric",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MetricsServiceV2Server).CreateLogMetric(ctx, req.(*CreateLogMetricRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _MetricsServiceV2_UpdateLogMetric_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(UpdateLogMetricRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MetricsServiceV2Server).UpdateLogMetric(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.MetricsServiceV2/UpdateLogMetric",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MetricsServiceV2Server).UpdateLogMetric(ctx, req.(*UpdateLogMetricRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _MetricsServiceV2_DeleteLogMetric_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(DeleteLogMetricRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MetricsServiceV2Server).DeleteLogMetric(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.logging.v2.MetricsServiceV2/DeleteLogMetric",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MetricsServiceV2Server).DeleteLogMetric(ctx, req.(*DeleteLogMetricRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+var _MetricsServiceV2_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "google.logging.v2.MetricsServiceV2",
+	HandlerType: (*MetricsServiceV2Server)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "ListLogMetrics",
+			Handler:    _MetricsServiceV2_ListLogMetrics_Handler,
+		},
+		{
+			MethodName: "GetLogMetric",
+			Handler:    _MetricsServiceV2_GetLogMetric_Handler,
+		},
+		{
+			MethodName: "CreateLogMetric",
+			Handler:    _MetricsServiceV2_CreateLogMetric_Handler,
+		},
+		{
+			MethodName: "UpdateLogMetric",
+			Handler:    _MetricsServiceV2_UpdateLogMetric_Handler,
+		},
+		{
+			MethodName: "DeleteLogMetric",
+			Handler:    _MetricsServiceV2_DeleteLogMetric_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{},
+}
+
+var fileDescriptor2 = []byte{
+	// 583 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x9c, 0x95, 0xcf, 0x6e, 0xd3, 0x40,
+	0x10, 0xc6, 0xe5, 0x96, 0x06, 0x32, 0x29, 0x04, 0x16, 0xd5, 0x8a, 0xdc, 0x22, 0x8a, 0x0f, 0x25,
+	0x04, 0x62, 0x83, 0x5b, 0x2a, 0x51, 0xc4, 0x85, 0x3f, 0xe2, 0x40, 0x41, 0x55, 0x0a, 0x48, 0x70,
+	0x89, 0x1c, 0x77, 0x6a, 0x16, 0x1c, 0xaf, 0x6b, 0x6f, 0xa2, 0x02, 0xe2, 0xc2, 0x8d, 0x33, 0x12,
+	0x88, 0xe7, 0xe2, 0x15, 0x78, 0x09, 0x6e, 0xd8, 0xeb, 0x75, 0x6a, 0x92, 0x55, 0xd3, 0xf4, 0xe6,
+	0x9d, 0xd9, 0xdd, 0xef, 0x37, 0x33, 0x5f, 0x36, 0x70, 0xdd, 0x67, 0xcc, 0x0f, 0xd0, 0x0e, 0x98,
+	0xef, 0xd3, 0xd0, 0xb7, 0x87, 0x4e, 0xf1, 0xd9, 0xed, 0x23, 0x8f, 0xa9, 0x97, 0x58, 0x51, 0xcc,
+	0x38, 0x23, 0x97, 0xf2, 0x8d, 0x96, 0xcc, 0x5a, 0x43, 0xc7, 0x58, 0x91, 0x67, 0xdd, 0x88, 0xda,
+	0x6e, 0x18, 0x32, 0xee, 0x72, 0xca, 0x42, 0x79, 0xc0, 0x58, 0x96, 0x59, 0xb1, 0xea, 0x0d, 0xf6,
+	0x6d, 0xec, 0x47, 0xfc, 0x63, 0x9e, 0x34, 0xdf, 0x40, 0x75, 0x9b, 0xf9, 0xcf, 0x85, 0x02, 0x21,
+	0x70, 0x26, 0x74, 0xfb, 0xd8, 0xd0, 0x56, 0xb5, 0x66, 0xb5, 0x23, 0xbe, 0xc9, 0x2a, 0xd4, 0xf6,
+	0x30, 0xf1, 0x62, 0x1a, 0x65, 0x77, 0x36, 0xe6, 0x44, 0xaa, 0x1c, 0x22, 0x3a, 0x54, 0xf6, 0x69,
+	0xc0, 0x31, 0x6e, 0xcc, 0x8b, 0xa4, 0x5c, 0x99, 0x43, 0x58, 0xda, 0xa6, 0x09, 0x1f, 0x5d, 0x9f,
+	0x74, 0xf0, 0x60, 0x80, 0x09, 0x27, 0xd7, 0x60, 0x31, 0x15, 0x7f, 0x8f, 0x1e, 0xef, 0x96, 0xe4,
+	0x6a, 0x32, 0xf6, 0x22, 0x53, 0xbd, 0x02, 0x10, 0xb9, 0x3e, 0x76, 0x39, 0xfb, 0x80, 0x85, 0x68,
+	0x35, 0x8b, 0xbc, 0xcc, 0x02, 0x64, 0x19, 0xc4, 0xa2, 0x9b, 0xd0, 0x4f, 0x28, 0x54, 0x17, 0x3a,
+	0xe7, 0xb2, 0xc0, 0x6e, 0xba, 0x36, 0x0f, 0x41, 0x1f, 0xd7, 0x4d, 0xa2, 0xb4, 0x1d, 0x48, 0x36,
+	0xe1, 0xac, 0xec, 0x65, 0xaa, 0x39, 0xdf, 0xac, 0x39, 0x2b, 0xd6, 0x44, 0x33, 0xad, 0xd1, 0xb9,
+	0x4e, 0xb1, 0x99, 0xac, 0x41, 0x3d, 0xc4, 0x43, 0xde, 0x9d, 0x40, 0x3a, 0x9f, 0x85, 0x77, 0x0a,
+	0x2c, 0x73, 0x13, 0x2e, 0x3f, 0xc5, 0x23, 0xe1, 0xa2, 0xde, 0xab, 0x50, 0xcb, 0x6f, 0x2a, 0x97,
+	0x0b, 0x79, 0x28, 0xab, 0xd6, 0x3c, 0x00, 0xfd, 0x51, 0x8c, 0x2e, 0xc7, 0x89, 0xa3, 0x27, 0x68,
+	0xd5, 0x06, 0x54, 0xf2, 0xab, 0x04, 0xd3, 0xb4, 0x9a, 0xe4, 0x5e, 0x93, 0x81, 0xfe, 0x2a, 0xda,
+	0x53, 0x49, 0x4e, 0xa3, 0x3d, 0xa5, 0xe0, 0x3d, 0xd0, 0x1f, 0x63, 0x80, 0xa7, 0x10, 0x74, 0xfe,
+	0x2e, 0xc0, 0x45, 0x39, 0xca, 0x5d, 0x8c, 0x87, 0xd4, 0xc3, 0xd7, 0x0e, 0xf9, 0xa5, 0xc1, 0x85,
+	0xff, 0xc7, 0x4c, 0x9a, 0x2a, 0x10, 0x95, 0x03, 0x8d, 0x1b, 0x27, 0xd8, 0x99, 0x7b, 0xc6, 0x74,
+	0xbe, 0xfe, 0xfe, 0xf3, 0x7d, 0xee, 0x16, 0x69, 0xa5, 0xbf, 0xc8, 0x1e, 0x72, 0xf7, 0x8e, 0xfd,
+	0xb9, 0x3c, 0x91, 0x07, 0x72, 0x91, 0xd8, 0xad, 0x2f, 0x76, 0xe1, 0x97, 0x6f, 0x1a, 0x2c, 0x96,
+	0x8d, 0x40, 0xd6, 0x14, 0x7a, 0x0a, 0xa7, 0x18, 0xc7, 0xb6, 0xd2, 0x5c, 0x17, 0x28, 0x6d, 0x72,
+	0xf3, 0x08, 0xa5, 0xd4, 0xb8, 0x12, 0x49, 0x01, 0x92, 0x32, 0x91, 0x1f, 0x1a, 0xd4, 0xc7, 0xcc,
+	0x45, 0x54, 0xe5, 0xab, 0x0d, 0x38, 0x85, 0x68, 0x4b, 0x10, 0x6d, 0x98, 0x33, 0x34, 0x67, 0x4b,
+	0x1a, 0x82, 0xfc, 0x4c, 0xc1, 0xc6, 0x2c, 0xa8, 0x04, 0x53, 0xdb, 0x74, 0x0a, 0xd8, 0x7d, 0x01,
+	0x76, 0xd7, 0x98, 0xa5, 0x55, 0x23, 0xb2, 0x74, 0x7c, 0xf5, 0x31, 0xaf, 0x2a, 0xc9, 0xd4, 0x7e,
+	0x36, 0xf4, 0x62, 0x6b, 0xf1, 0xe0, 0x5a, 0x4f, 0xb2, 0x07, 0xb7, 0x18, 0x5f, 0x6b, 0x16, 0xa6,
+	0x87, 0xcf, 0x60, 0xc9, 0x63, 0xfd, 0x49, 0xf1, 0x1d, 0xed, 0xed, 0x6d, 0x9f, 0xf2, 0x77, 0x83,
+	0x9e, 0x95, 0xe6, 0xed, 0x3c, 0x9f, 0xbe, 0xff, 0x49, 0xfe, 0xcc, 0xb7, 0xbd, 0x80, 0x62, 0xc8,
+	0xdb, 0x3e, 0x2b, 0xfd, 0xa1, 0xf4, 0x2a, 0x22, 0xb7, 0xfe, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xfd,
+	0x05, 0x7b, 0x1a, 0x6c, 0x06, 0x00, 0x00,
+}
diff --git a/go/src/google.golang.org/cloud/.travis.yml b/go/src/google.golang.org/cloud/.travis.yml
new file mode 100644
index 0000000..197dedc
--- /dev/null
+++ b/go/src/google.golang.org/cloud/.travis.yml
@@ -0,0 +1,11 @@
+sudo: false
+language: go
+go:
+- 1.5
+- 1.6
+install:
+- go get -v google.golang.org/cloud/...
+script:
+- openssl aes-256-cbc -K $encrypted_912ff8fa81ad_key -iv $encrypted_912ff8fa81ad_iv -in key.json.enc -out key.json -d
+- GCLOUD_TESTS_GOLANG_PROJECT_ID="dulcet-port-762" GCLOUD_TESTS_GOLANG_KEY="$(pwd)/key.json"
+  go test -v google.golang.org/cloud/...
diff --git a/go/src/google.golang.org/cloud/AUTHORS b/go/src/google.golang.org/cloud/AUTHORS
index f92e5cf..c364af1 100644
--- a/go/src/google.golang.org/cloud/AUTHORS
+++ b/go/src/google.golang.org/cloud/AUTHORS
@@ -6,6 +6,7 @@
 # Name or Organization <email address>
 # The email address is not required for organizations.
 
+Filippo Valsorda <hi@filippo.io>
 Google Inc.
 Ingo Oeser <nightlyone@googlemail.com>
 Palm Stone Games, Inc.
diff --git a/go/src/google.golang.org/cloud/CONTRIBUTING.md b/go/src/google.golang.org/cloud/CONTRIBUTING.md
index adb9ec1..135a1a1 100644
--- a/go/src/google.golang.org/cloud/CONTRIBUTING.md
+++ b/go/src/google.golang.org/cloud/CONTRIBUTING.md
@@ -7,9 +7,12 @@
        origin is https://code.googlesource.com/gocloud:
 
             git remote set-url origin https://code.googlesource.com/gocloud
+1. Make sure your auth is configured correctly by visiting
+   https://code.googlesource.com, clicking "Generate Password", and following
+   the directions.
 1. Make changes and create a change by running `git codereview change <name>`,
-provide a command message, and use `git codereview mail` to create a Gerrit CL.
-1. Keep amending to the change and mail as your recieve feedback.
+provide a commit message, and use `git codereview mail` to create a Gerrit CL.
+1. Keep amending to the change and mail as you receive feedback.
 
 ## Integration Tests
 
@@ -32,9 +35,6 @@
 From the project's root directory:
 
 ``` sh
-# Install the app component
-$ gcloud components update app
-
 # Set the default project in your env
 $ gcloud config set project $GCLOUD_TESTS_GOLANG_PROJECT_ID
 
diff --git a/go/src/google.golang.org/cloud/CONTRIBUTORS b/go/src/google.golang.org/cloud/CONTRIBUTORS
index 27db791..6e1e7f1 100644
--- a/go/src/google.golang.org/cloud/CONTRIBUTORS
+++ b/go/src/google.golang.org/cloud/CONTRIBUTORS
@@ -17,6 +17,7 @@
 Dave Day <djd@golang.org>
 David Sansome <me@davidsansome.com>
 David Symonds <dsymonds@golang.org>
+Filippo Valsorda <hi@filippo.io>
 Glenn Lewis <gmlewis@google.com>
 Ingo Oeser <nightlyone@googlemail.com>
 Johan Euphrosine <proppy@google.com>
diff --git a/go/src/google.golang.org/cloud/README.google b/go/src/google.golang.org/cloud/README.google
index a1c3700..e942d2e 100644
--- a/go/src/google.golang.org/cloud/README.google
+++ b/go/src/google.golang.org/cloud/README.google
@@ -1,5 +1,5 @@
-URL: https://code.googlesource.com/gocloud/+archive/79ffda073da804f325135da5ff645a630a4d7625.tar.gz
-Version: 79ffda073da804f325135da5ff645a630a4d7625
+URL: https://code.googlesource.com/gocloud/+archive/4a23f97e60c9a14de1269e78812e59ca94033d85.tar.gz
+Version: 4a23f97e60c9a14de1269e78812e59ca94033d85
 License: New BSD
 License File: LICENSE
 
diff --git a/go/src/google.golang.org/cloud/README.md b/go/src/google.golang.org/cloud/README.md
index 85a2d86..13b7a0d 100644
--- a/go/src/google.golang.org/cloud/README.md
+++ b/go/src/google.golang.org/cloud/README.md
@@ -7,7 +7,7 @@
 import "google.golang.org/cloud"
 ```
 
-**NOTE:** These packages are experimental, and may occasionally make
+**NOTE:** These packages are under development, and may occasionally make
 backwards-incompatible changes.
 
 **NOTE:** Github repo is a mirror of [https://code.googlesource.com/gocloud](https://code.googlesource.com/gocloud).
@@ -16,8 +16,8 @@
 
 Google API                     | Status       | Package
 -------------------------------|--------------|-----------------------------------------------------------
-[Datastore][cloud-datastore]   | experimental | [`google.golang.org/cloud/datastore`][cloud-datastore-ref]
-[Cloud Storage][cloud-storage] | experimental | [`google.golang.org/cloud/storage`][cloud-storage-ref]
+[Datastore][cloud-datastore]   | beta         | [`google.golang.org/cloud/datastore`][cloud-datastore-ref]
+[Storage][cloud-storage]       | beta         | [`google.golang.org/cloud/storage`][cloud-storage-ref]
 [Pub/Sub][cloud-pubsub]        | experimental | [`google.golang.org/cloud/pubsub`][cloud-pubsub-ref]
 [BigTable][cloud-bigtable]     | stable       | [`google.golang.org/cloud/bigtable`][cloud-bigtable-ref]
 [BigQuery][cloud-bigquery]     | experimental | [`google.golang.org/cloud/bigquery`][cloud-bigquery-ref]
@@ -46,7 +46,7 @@
 Manually-configured authorization can be achieved using the
 [`golang.org/x/oauth2`](https://godoc.org/golang.org/x/oauth2) package to
 create an `oauth2.TokenSource`. This token source can be passed to the `NewClient`
-function for the relevant API using a 
+function for the relevant API using a
 [`cloud.WithTokenSource`](https://godoc.org/google.golang.org/cloud#WithTokenSource)
 option.
 
diff --git a/go/src/google.golang.org/cloud/bigquery/bigquery.go b/go/src/google.golang.org/cloud/bigquery/bigquery.go
index bc23488..9431f1b 100644
--- a/go/src/google.golang.org/cloud/bigquery/bigquery.go
+++ b/go/src/google.golang.org/cloud/bigquery/bigquery.go
@@ -18,12 +18,16 @@
 
 import (
 	"fmt"
-	"net/http"
+
+	"google.golang.org/cloud"
+	"google.golang.org/cloud/internal/transport"
 
 	"golang.org/x/net/context"
 	bq "google.golang.org/api/bigquery/v2"
 )
 
+const prodAddr = "https://www.googleapis.com/bigquery/v2/"
+
 // A Source is a source of data for the Copy function.
 type Source interface {
 	implementsSource()
@@ -50,6 +54,7 @@
 }
 
 const Scope = "https://www.googleapis.com/auth/bigquery"
+const userAgent = "gcloud-golang-bigquery/20160429"
 
 // Client may be used to perform BigQuery operations.
 type Client struct {
@@ -57,20 +62,27 @@
 	projectID string
 }
 
-// Note: many of the methods on *Client appear in the various *_op.go source files.
-
 // NewClient constructs a new Client which can perform BigQuery operations.
 // Operations performed via the client are billed to the specified GCP project.
-// The supplied http.Client is used for making requests to the BigQuery server and must be capable of
-// authenticating requests with Scope.
-func NewClient(client *http.Client, projectID string) (*Client, error) {
-	bqService, err := newBigqueryService(client)
+func NewClient(ctx context.Context, projectID string, opts ...cloud.ClientOption) (*Client, error) {
+	o := []cloud.ClientOption{
+		cloud.WithEndpoint(prodAddr),
+		cloud.WithScopes(Scope),
+		cloud.WithUserAgent(userAgent),
+	}
+	o = append(o, opts...)
+	httpClient, endpoint, err := transport.NewHTTPClient(ctx, o...)
+	if err != nil {
+		return nil, fmt.Errorf("dialing: %v", err)
+	}
+
+	s, err := newBigqueryService(httpClient, endpoint)
 	if err != nil {
 		return nil, fmt.Errorf("constructing bigquery client: %v", err)
 	}
 
 	c := &Client{
-		service:   bqService,
+		service:   s,
 		projectID: projectID,
 	}
 	return c, nil
diff --git a/go/src/google.golang.org/cloud/bigquery/create_table_test.go b/go/src/google.golang.org/cloud/bigquery/create_table_test.go
index e9a4988..109d5c9 100644
--- a/go/src/google.golang.org/cloud/bigquery/create_table_test.go
+++ b/go/src/google.golang.org/cloud/bigquery/create_table_test.go
@@ -20,6 +20,7 @@
 	"time"
 
 	"golang.org/x/net/context"
+	bq "google.golang.org/api/bigquery/v2"
 )
 
 type createTableRecorder struct {
@@ -39,7 +40,8 @@
 	}
 	exp := time.Now()
 	q := "query"
-	if _, err := c.CreateTable(context.Background(), "p", "d", "t", TableExpiration(exp), ViewQuery(q)); err != nil {
+	if _, err := c.CreateTable(context.Background(), "p", "d", "t",
+		TableExpiration(exp), ViewQuery(q)); err != nil {
 		t.Fatalf("err calling CreateTable: %v", err)
 	}
 	want := createTableConf{
@@ -52,4 +54,25 @@
 	if !reflect.DeepEqual(*s.conf, want) {
 		t.Errorf("createTableConf: got:\n%v\nwant:\n%v", *s.conf, want)
 	}
+
+	sc := Schema{fieldSchema("desc", "name", "STRING", false, true)}
+	if _, err := c.CreateTable(context.Background(), "p", "d", "t",
+		TableExpiration(exp), sc); err != nil {
+		t.Fatalf("err calling CreateTable: %v", err)
+	}
+	want = createTableConf{
+		projectID:  "p",
+		datasetID:  "d",
+		tableID:    "t",
+		expiration: exp,
+		// No need for an elaborate schema, that is tested in schema_test.go.
+		schema: &bq.TableSchema{
+			Fields: []*bq.TableFieldSchema{
+				bqTableFieldSchema("desc", "name", "STRING", "REQUIRED"),
+			},
+		},
+	}
+	if !reflect.DeepEqual(*s.conf, want) {
+		t.Errorf("createTableConf: got:\n%v\nwant:\n%v", *s.conf, want)
+	}
 }
diff --git a/go/src/google.golang.org/cloud/bigquery/error.go b/go/src/google.golang.org/cloud/bigquery/error.go
index b2e3e3f..b59ac6e 100644
--- a/go/src/google.golang.org/cloud/bigquery/error.go
+++ b/go/src/google.golang.org/cloud/bigquery/error.go
@@ -20,7 +20,7 @@
 	bq "google.golang.org/api/bigquery/v2"
 )
 
-// An Error contains detailed information about an error encountered while processing a job.
+// An Error contains detailed information about a failed bigquery operation.
 type Error struct {
 	// Mirrors bq.ErrorProto, but drops DebugInfo
 	Location, Message, Reason string
@@ -40,3 +40,43 @@
 		Reason:   ep.Reason,
 	}
 }
+
+// A MultiError contains multiple related errors.
+type MultiError []error
+
+func (m MultiError) Error() string {
+	switch len(m) {
+	case 0:
+		return "(0 errors)"
+	case 1:
+		return m[0].Error()
+	case 2:
+		return m[0].Error() + " (and 1 other error)"
+	}
+	return fmt.Sprintf("%s (and %d other errors)", m[0].Error(), len(m)-1)
+}
+
+// RowInsertionError contains all errors that occurred when attempting to insert a row.
+type RowInsertionError struct {
+	InsertID string // The InsertID associated with the affected row.
+	RowIndex int    // The 0-based index of the affected row in the batch of rows being inserted.
+	Errors   MultiError
+}
+
+func (e *RowInsertionError) Error() string {
+	errFmt := "insertion of row [insertID: %q; insertIndex: %v] failed with error: %s"
+	return fmt.Sprintf(errFmt, e.InsertID, e.RowIndex, e.Errors.Error())
+}
+
+// PutMultiError contains an error for each row which was not successfully inserted
+// into a BigQuery table.
+type PutMultiError []RowInsertionError
+
+func (pme PutMultiError) Error() string {
+	plural := "s"
+	if len(pme) == 1 {
+		plural = ""
+	}
+
+	return fmt.Sprintf("%v row insertion%s failed", len(pme), plural)
+}
diff --git a/go/src/google.golang.org/cloud/bigquery/error_test.go b/go/src/google.golang.org/cloud/bigquery/error_test.go
new file mode 100644
index 0000000..ddd9919
--- /dev/null
+++ b/go/src/google.golang.org/cloud/bigquery/error_test.go
@@ -0,0 +1,80 @@
+// Copyright 2015 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package bigquery
+
+import (
+	"errors"
+	"testing"
+)
+
+func rowInsertionError(msg string) RowInsertionError {
+	return RowInsertionError{Errors: []error{errors.New(msg)}}
+}
+
+func TestPutMultiErrorString(t *testing.T) {
+	testCases := []struct {
+		errs PutMultiError
+		want string
+	}{
+		{
+			errs: PutMultiError{},
+			want: "0 row insertions failed",
+		},
+		{
+			errs: PutMultiError{rowInsertionError("a")},
+			want: "1 row insertion failed",
+		},
+		{
+			errs: PutMultiError{rowInsertionError("a"), rowInsertionError("b")},
+			want: "2 row insertions failed",
+		},
+	}
+
+	for _, tc := range testCases {
+		if tc.errs.Error() != tc.want {
+			t.Errorf("PutMultiError string: got:\n%v\nwant:\n%v", tc.errs.Error(), tc.want)
+		}
+	}
+}
+
+func TestMultiErrorString(t *testing.T) {
+	testCases := []struct {
+		errs MultiError
+		want string
+	}{
+		{
+			errs: MultiError{},
+			want: "(0 errors)",
+		},
+		{
+			errs: MultiError{errors.New("a")},
+			want: "a",
+		},
+		{
+			errs: MultiError{errors.New("a"), errors.New("b")},
+			want: "a (and 1 other error)",
+		},
+		{
+			errs: MultiError{errors.New("a"), errors.New("b"), errors.New("c")},
+			want: "a (and 2 other errors)",
+		},
+	}
+
+	for _, tc := range testCases {
+		if tc.errs.Error() != tc.want {
+			t.Errorf("MultiError string: got:\n%v\nwant:\n%v", tc.errs.Error(), tc.want)
+		}
+	}
+}
diff --git a/go/src/google.golang.org/cloud/bigquery/schema.go b/go/src/google.golang.org/cloud/bigquery/schema.go
index 51888b9..5f3095e 100644
--- a/go/src/google.golang.org/cloud/bigquery/schema.go
+++ b/go/src/google.golang.org/cloud/bigquery/schema.go
@@ -74,6 +74,11 @@
 	return &bq.TableSchema{Fields: fields}
 }
 
+// customizeCreateTable allows a Schema to be used directly as an option to CreateTable.
+func (s Schema) customizeCreateTable(conf *createTableConf) {
+	conf.schema = s.asTableSchema()
+}
+
 func convertTableFieldSchema(tfs *bq.TableFieldSchema) *FieldSchema {
 	fs := &FieldSchema{
 		Description: tfs.Description,
diff --git a/go/src/google.golang.org/cloud/bigquery/service.go b/go/src/google.golang.org/cloud/bigquery/service.go
index b57f84e..ee8931e 100644
--- a/go/src/google.golang.org/cloud/bigquery/service.go
+++ b/go/src/google.golang.org/cloud/bigquery/service.go
@@ -35,32 +35,35 @@
 	getJobType(ctx context.Context, projectId, jobID string) (jobType, error)
 	jobStatus(ctx context.Context, projectId, jobID string) (*JobStatus, error)
 
-	// Queries
-
-	// readQuery reads data resulting from a query job. If the job is not
-	// yet complete, an errIncompleteJob is returned. readQuery may be
-	// called repeatedly to wait for results indefinitely.
-	readQuery(ctx context.Context, conf *readQueryConf, pageToken string) (*readDataResult, error)
-
-	readTabledata(ctx context.Context, conf *readTableConf, pageToken string) (*readDataResult, error)
-
 	// Tables
 	createTable(ctx context.Context, conf *createTableConf) error
 	getTableMetadata(ctx context.Context, projectID, datasetID, tableID string) (*TableMetadata, error)
 	deleteTable(ctx context.Context, projectID, datasetID, tableID string) error
 	listTables(ctx context.Context, projectID, datasetID, pageToken string) ([]*Table, string, error)
 	patchTable(ctx context.Context, projectID, datasetID, tableID string, conf *patchTableConf) (*TableMetadata, error)
+
+	// Table data
+	readTabledata(ctx context.Context, conf *readTableConf, pageToken string) (*readDataResult, error)
+	insertRows(ctx context.Context, projectID, datasetID, tableID string, rows []*insertionRow) error
+
+	// Misc
+
+	// readQuery reads data resulting from a query job. If the job is
+	// incomplete, an errIncompleteJob is returned. readQuery may be called
+	// repeatedly to poll for job completion.
+	readQuery(ctx context.Context, conf *readQueryConf, pageToken string) (*readDataResult, error)
 }
 
 type bigqueryService struct {
 	s *bq.Service
 }
 
-func newBigqueryService(client *http.Client) (*bigqueryService, error) {
+func newBigqueryService(client *http.Client, endpoint string) (*bigqueryService, error) {
 	s, err := bq.New(client)
 	if err != nil {
 		return nil, fmt.Errorf("constructing bigquery client: %v", err)
 	}
+	s.BasePath = endpoint
 
 	return &bigqueryService{s: s}, nil
 }
@@ -210,6 +213,43 @@
 	return result, nil
 }
 
+func (s *bigqueryService) insertRows(ctx context.Context, projectID, datasetID, tableID string, rows []*insertionRow) error {
+	conf := &bq.TableDataInsertAllRequest{}
+	for _, row := range rows {
+		m := make(map[string]bq.JsonValue)
+		for k, v := range row.Row {
+			m[k] = bq.JsonValue(v)
+		}
+		conf.Rows = append(conf.Rows, &bq.TableDataInsertAllRequestRows{
+			InsertId: row.InsertID,
+			Json:     m,
+		})
+	}
+	res, err := s.s.Tabledata.InsertAll(projectID, datasetID, tableID, conf).Context(ctx).Do()
+	if err != nil {
+		return err
+	}
+	if len(res.InsertErrors) == 0 {
+		return nil
+	}
+
+	var errs PutMultiError
+	for _, e := range res.InsertErrors {
+		if int(e.Index) >= len(rows) {
+			return fmt.Errorf("internal error: unexpected row index: %v", e.Index)
+		}
+		rie := RowInsertionError{
+			InsertID: rows[e.Index].InsertID,
+			RowIndex: int(e.Index),
+		}
+		for _, errp := range e.Errors {
+			rie.Errors = append(rie.Errors, errorFromErrorProto(errp))
+		}
+		errs = append(errs, rie)
+	}
+	return errs
+}
+
 type jobType int
 
 const (
@@ -296,6 +336,7 @@
 	projectID, datasetID, tableID string
 	expiration                    time.Time
 	viewQuery                     string
+	schema                        *bq.TableSchema
 }
 
 // createTable creates a table in the BigQuery service.
@@ -314,11 +355,15 @@
 	if !conf.expiration.IsZero() {
 		table.ExpirationTime = conf.expiration.UnixNano() / 1000
 	}
+	// TODO(jba): make it impossible to provide both a view query and a schema.
 	if conf.viewQuery != "" {
 		table.View = &bq.ViewDefinition{
 			Query: conf.viewQuery,
 		}
 	}
+	if conf.schema != nil {
+		table.Schema = conf.schema
+	}
 
 	_, err := s.s.Tables.Insert(conf.projectID, conf.datasetID, table).Context(ctx).Do()
 	return err
diff --git a/go/src/google.golang.org/cloud/bigquery/table.go b/go/src/google.golang.org/cloud/bigquery/table.go
index 5ac5487..e3e09a2 100644
--- a/go/src/google.golang.org/cloud/bigquery/table.go
+++ b/go/src/google.golang.org/cloud/bigquery/table.go
@@ -225,7 +225,7 @@
 
 type tableExpiration time.Time
 
-// TableExpiration returns a CreateTableOption which will cause the created table to be deleted after the expiration time.
+// TableExpiration returns a CreateTableOption that will cause the created table to be deleted after the expiration time.
 func TableExpiration(exp time.Time) CreateTableOption { return tableExpiration(exp) }
 
 func (opt tableExpiration) customizeCreateTable(conf *createTableConf) {
@@ -276,3 +276,8 @@
 func (p *TableMetadataPatch) Apply(ctx context.Context) (*TableMetadata, error) {
 	return p.s.patchTable(ctx, p.projectID, p.datasetID, p.tableID, &p.conf)
 }
+
+// NewUploader returns an *Uploader that can be used to append rows to t.
+func (t *Table) NewUploader() *Uploader {
+	return &Uploader{t: t}
+}
diff --git a/go/src/google.golang.org/cloud/bigquery/uploader.go b/go/src/google.golang.org/cloud/bigquery/uploader.go
new file mode 100644
index 0000000..e5825ce
--- /dev/null
+++ b/go/src/google.golang.org/cloud/bigquery/uploader.go
@@ -0,0 +1,77 @@
+// Copyright 2015 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package bigquery
+
+import (
+	"fmt"
+	"reflect"
+
+	"golang.org/x/net/context"
+)
+
+// An Uploader does streaming inserts into a BigQuery table.
+// It is safe for concurrent use.
+type Uploader struct {
+	t *Table
+}
+
+// Put uploads one or more rows to the BigQuery service.  src must implement ValueSaver or be a slice of ValueSavers.
+// Put returns a PutMultiError if one or more rows failed to be uploaded.
+// The PutMultiError contains a RowInsertionError for each failed row.
+func (u *Uploader) Put(ctx context.Context, src interface{}) error {
+	// TODO(mcgreevy): Support structs which do not implement ValueSaver as src, a la Datastore.
+	// TODO(mcgreevy): Support options [SkipInvalidRows,IgnoreUnknownValues]
+
+	if saver, ok := src.(ValueSaver); ok {
+		return u.putMulti(ctx, []ValueSaver{saver})
+	}
+
+	srcVal := reflect.ValueOf(src)
+	if srcVal.Kind() != reflect.Slice {
+		return fmt.Errorf("%T is not a ValueSaver or slice of ValueSavers", src)
+	}
+
+	var savers []ValueSaver
+	for i := 0; i < srcVal.Len(); i++ {
+		s := srcVal.Index(i).Interface()
+		saver, ok := s.(ValueSaver)
+		if !ok {
+			return fmt.Errorf("element %d of src is of type %T, which is not a ValueSaver", i, s)
+		}
+		savers = append(savers, saver)
+	}
+	return u.putMulti(ctx, savers)
+}
+
+func (u *Uploader) putMulti(ctx context.Context, src []ValueSaver) error {
+	var rows []*insertionRow
+	for _, saver := range src {
+		row, insertID, err := saver.Save()
+		if err != nil {
+			return err
+		}
+		rows = append(rows, &insertionRow{InsertID: insertID, Row: row})
+	}
+	return u.t.service.insertRows(ctx, u.t.ProjectID, u.t.DatasetID, u.t.TableID, rows)
+}
+
+// An insertionRow represents a row of data to be inserted into a table.
+type insertionRow struct {
+	// If InsertID is non-empty, BigQuery will use it to de-duplicate insertions of
+	// this row on a best-effort basis.
+	InsertID string
+	// The data to be inserted, represented as a map from field name to Value.
+	Row map[string]Value
+}
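For illustration (not part of this change), the reflection-based dispatch in `Uploader.Put` above — accept a single implementor or a slice whose elements all implement the interface, reject anything else — can be sketched with local stand-ins (`saver` and `row` here are illustrative, not the package's types):

```go
package main

import (
	"fmt"
	"reflect"
)

// saver stands in for bigquery.ValueSaver in this sketch.
type saver interface{ Save() string }

type row string

func (r row) Save() string { return string(r) }

// collectSavers mirrors the dispatch in Uploader.Put: accept a single
// saver, or a slice whose elements all implement saver; reject the rest.
func collectSavers(src interface{}) ([]saver, error) {
	if s, ok := src.(saver); ok {
		return []saver{s}, nil
	}
	v := reflect.ValueOf(src)
	if v.Kind() != reflect.Slice {
		return nil, fmt.Errorf("%T is not a saver or slice of savers", src)
	}
	var out []saver
	for i := 0; i < v.Len(); i++ {
		e := v.Index(i).Interface()
		s, ok := e.(saver)
		if !ok {
			return nil, fmt.Errorf("element %d is of type %T, which is not a saver", i, e)
		}
		out = append(out, s)
	}
	return out, nil
}

func main() {
	ss, _ := collectSavers([]row{"a", "b"})
	fmt.Println(len(ss)) // 2
	_, err := collectSavers(1)
	fmt.Println(err != nil) // true
}
```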
diff --git a/go/src/google.golang.org/cloud/bigquery/uploader_test.go b/go/src/google.golang.org/cloud/bigquery/uploader_test.go
new file mode 100644
index 0000000..6977c55
--- /dev/null
+++ b/go/src/google.golang.org/cloud/bigquery/uploader_test.go
@@ -0,0 +1,149 @@
+// Copyright 2015 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package bigquery
+
+import (
+	"reflect"
+	"testing"
+
+	"golang.org/x/net/context"
+)
+
+type testSaver struct {
+	ir  *insertionRow
+	err error
+}
+
+func (ts testSaver) Save() (map[string]Value, string, error) {
+	return ts.ir.Row, ts.ir.InsertID, ts.err
+}
+
+func TestRejectsNonValueSavers(t *testing.T) {
+	u := Uploader{defaultTable}
+
+	testCases := []struct {
+		src interface{}
+	}{
+		{
+			src: 1,
+		},
+		{
+			src: []int{1, 2},
+		},
+		{
+			src: []interface{}{
+				testSaver{ir: &insertionRow{"a", map[string]Value{"one": 1}}},
+				1,
+			},
+		},
+	}
+
+	for _, tc := range testCases {
+		if err := u.Put(context.Background(), tc.src); err == nil {
+			t.Errorf("put value: %v; got nil error; want non-nil", tc.src)
+		}
+	}
+}
+
+type insertRowsRecorder struct {
+	rowBatches [][]*insertionRow
+	service
+}
+
+func (irr *insertRowsRecorder) insertRows(ctx context.Context, projectID, datasetID, tableID string, rows []*insertionRow) error {
+	irr.rowBatches = append(irr.rowBatches, rows)
+	return nil
+}
+
+func TestInsertsData(t *testing.T) {
+	table := &Table{
+		ProjectID: "project-id",
+		DatasetID: "dataset-id",
+		TableID:   "table-id",
+	}
+
+	testCases := []struct {
+		data [][]*insertionRow
+	}{
+		{
+			data: [][]*insertionRow{
+				{
+					&insertionRow{"a", map[string]Value{"one": 1}},
+				},
+			},
+		},
+		{
+
+			data: [][]*insertionRow{
+				{
+					&insertionRow{"a", map[string]Value{"one": 1}},
+					&insertionRow{"b", map[string]Value{"two": 2}},
+				},
+			},
+		},
+		{
+
+			data: [][]*insertionRow{
+				{
+					&insertionRow{"a", map[string]Value{"one": 1}},
+				},
+				{
+					&insertionRow{"b", map[string]Value{"two": 2}},
+				},
+			},
+		},
+		{
+
+			data: [][]*insertionRow{
+				{
+					&insertionRow{"a", map[string]Value{"one": 1}},
+					&insertionRow{"b", map[string]Value{"two": 2}},
+				},
+				{
+					&insertionRow{"c", map[string]Value{"three": 3}},
+					&insertionRow{"d", map[string]Value{"four": 4}},
+				},
+			},
+		},
+	}
+	for _, tc := range testCases {
+		irr := &insertRowsRecorder{}
+		table.service = irr
+		u := Uploader{table}
+		for _, batch := range tc.data {
+			if len(batch) == 0 {
+				continue
+			}
+			var toUpload interface{}
+			if len(batch) == 1 {
+				toUpload = testSaver{ir: batch[0]}
+			} else {
+				savers := []testSaver{}
+				for _, row := range batch {
+					savers = append(savers, testSaver{ir: row})
+				}
+				toUpload = savers
+			}
+
+			err := u.Put(context.Background(), toUpload)
+			if err != nil {
+				t.Errorf("expected successful Put of ValueSaver; got: %v", err)
+			}
+		}
+		if got, want := irr.rowBatches, tc.data; !reflect.DeepEqual(got, want) {
+			t.Errorf("got: %v, want: %v", got, want)
+		}
+	}
+}
diff --git a/go/src/google.golang.org/cloud/bigquery/value.go b/go/src/google.golang.org/cloud/bigquery/value.go
index 369bcd4..2433ad9 100644
--- a/go/src/google.golang.org/cloud/bigquery/value.go
+++ b/go/src/google.golang.org/cloud/bigquery/value.go
@@ -41,6 +41,56 @@
 	return nil
 }
 
+// A ValueSaver returns a row of data to be inserted into a table.
+type ValueSaver interface {
+	// Save returns a row to be inserted into a BigQuery table, represented
+	// as a map from field name to Value.
+	// If insertID is non-empty, BigQuery will use it to de-duplicate
+	// insertions of this row on a best-effort basis.
+	Save() (row map[string]Value, insertID string, err error)
+}
+
+// ValuesSaver implements ValueSaver for a slice of Values.
+type ValuesSaver struct {
+	Schema Schema
+
+	// If non-empty, BigQuery will use InsertID to de-duplicate insertions
+	// of this row on a best-effort basis.
+	InsertID string
+
+	Row []Value
+}
+
+// Save implements ValueSaver.
+func (vls *ValuesSaver) Save() (map[string]Value, string, error) {
+	m, err := valuesToMap(vls.Row, vls.Schema)
+	return m, vls.InsertID, err
+}
+
+func valuesToMap(vs []Value, schema Schema) (map[string]Value, error) {
+	if len(vs) != len(schema) {
+		return nil, errors.New("schema does not match length of row to be inserted")
+	}
+
+	m := make(map[string]Value)
+	for i, fieldSchema := range schema {
+		if fieldSchema.Type == RecordFieldType {
+			nested, ok := vs[i].([]Value)
+			if !ok {
+				return nil, errors.New("nested record is not a []Value")
+			}
+			value, err := valuesToMap(nested, fieldSchema.Schema)
+			if err != nil {
+				return nil, err
+			}
+			m[fieldSchema.Name] = value
+		} else {
+			m[fieldSchema.Name] = vs[i]
+		}
+	}
+	return m, nil
+}
+
 // convertRows converts a series of TableRows into a series of Value slices.
 // schema is used to interpret the data from rows; its length must match the
 // length of each row.
diff --git a/go/src/google.golang.org/cloud/bigquery/value_test.go b/go/src/google.golang.org/cloud/bigquery/value_test.go
index fbd8089..9d21ac7 100644
--- a/go/src/google.golang.org/cloud/bigquery/value_test.go
+++ b/go/src/google.golang.org/cloud/bigquery/value_test.go
@@ -323,3 +323,60 @@
 		t.Errorf("converting repeated records containing record : got:\n%v\nwant:\n%v", got, want)
 	}
 }
+
+func TestValuesSaverConvertsToMap(t *testing.T) {
+	testCases := []struct {
+		vs   ValuesSaver
+		want *insertionRow
+	}{
+		{
+			vs: ValuesSaver{
+				Schema: []*FieldSchema{
+					{Name: "intField", Type: IntegerFieldType},
+					{Name: "strField", Type: StringFieldType},
+				},
+				InsertID: "iid",
+				Row:      []Value{1, "a"},
+			},
+			want: &insertionRow{
+				InsertID: "iid",
+				Row:      map[string]Value{"intField": 1, "strField": "a"},
+			},
+		},
+		{
+			vs: ValuesSaver{
+				Schema: []*FieldSchema{
+					{Name: "intField", Type: IntegerFieldType},
+					{
+						Name: "recordField",
+						Type: RecordFieldType,
+						Schema: []*FieldSchema{
+							{Name: "nestedInt", Type: IntegerFieldType, Repeated: true},
+						},
+					},
+				},
+				InsertID: "iid",
+				Row:      []Value{1, []Value{[]Value{2, 3}}},
+			},
+			want: &insertionRow{
+				InsertID: "iid",
+				Row: map[string]Value{
+					"intField": 1,
+					"recordField": map[string]Value{
+						"nestedInt": []Value{2, 3},
+					},
+				},
+			},
+		},
+	}
+	for _, tc := range testCases {
+		data, insertID, err := tc.vs.Save()
+		if err != nil {
+			t.Errorf("Expected successful save; got: %v", err)
+		}
+		got := &insertionRow{insertID, data}
+		if !reflect.DeepEqual(got, tc.want) {
+			t.Errorf("saving ValuesSaver: got:\n%v\nwant:\n%v", got, tc.want)
+		}
+	}
+}
diff --git a/go/src/google.golang.org/cloud/bigtable/admin_test.go b/go/src/google.golang.org/cloud/bigtable/admin_test.go
index 191afc2..b33b95a 100644
--- a/go/src/google.golang.org/cloud/bigtable/admin_test.go
+++ b/go/src/google.golang.org/cloud/bigtable/admin_test.go
@@ -13,7 +13,7 @@
 )
 
 func TestAdminIntegration(t *testing.T) {
-	srv, err := bttest.NewServer()
+	srv, err := bttest.NewServer("127.0.0.1:0")
 	if err != nil {
 		t.Fatal(err)
 	}
diff --git a/go/src/google.golang.org/cloud/bigtable/bigtable.go b/go/src/google.golang.org/cloud/bigtable/bigtable.go
index 16e24ca..f7f50f1 100644
--- a/go/src/google.golang.org/cloud/bigtable/bigtable.go
+++ b/go/src/google.golang.org/cloud/bigtable/bigtable.go
@@ -29,6 +29,7 @@
 	btspb "google.golang.org/cloud/bigtable/internal/service_proto"
 	"google.golang.org/cloud/internal/transport"
 	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
 )
 
 const prodAddr = "bigtable.googleapis.com:443"
@@ -463,6 +464,59 @@
 	m.ops = append(m.ops, &btdpb.Mutation{Mutation: &btdpb.Mutation_DeleteFromRow_{&btdpb.Mutation_DeleteFromRow{}}})
 }
 
+// ApplyBulk applies multiple Mutations.
+// Each mutation is individually applied atomically,
+// but the set of mutations may be applied in any order.
+//
+// Two types of failures may occur. If the entire process
+// fails, (nil, err) will be returned. If specific mutations
+// fail to apply, ([]err, nil) will be returned, and the errors
+// will correspond to the relevant rowKeys/muts arguments.
+//
+// Depending on how the mutations are batched at the server, one mutation may fail
+// due to a problem with another mutation. In that case, the same error will be
+// reported for both mutations.
+//
+// Conditional mutations cannot be applied in bulk and providing one will result in an error.
+func (t *Table) ApplyBulk(ctx context.Context, rowKeys []string, muts []*Mutation, opts ...ApplyOption) ([]error, error) {
+	if len(rowKeys) != len(muts) {
+		return nil, fmt.Errorf("mismatched rowKeys and mutation array lengths: %d, %d", len(rowKeys), len(muts))
+	}
+
+	after := func(res proto.Message) {
+		for _, o := range opts {
+			o.after(res)
+		}
+	}
+
+	req := &btspb.MutateRowsRequest{
+		TableName: t.c.fullTableName(t.table),
+		Entries:   make([]*btspb.MutateRowsRequest_Entry, len(rowKeys)),
+	}
+	for i, key := range rowKeys {
+		mut := muts[i]
+		if mut.cond != nil {
+			return nil, fmt.Errorf("conditional mutations cannot be applied in bulk")
+		}
+		req.Entries[i] = &btspb.MutateRowsRequest_Entry{RowKey: []byte(key), Mutations: mut.ops}
+	}
+	res, err := t.c.client.MutateRows(ctx, req)
+	if err != nil {
+		return nil, err
+	}
+	var errors []error // kept as nil if everything is OK
+	for i, status := range res.Statuses {
+		if status.Code == int32(codes.OK) {
+			continue
+		}
+		if errors == nil {
+			errors = make([]error, len(res.Statuses))
+		}
+		errors[i] = grpc.Errorf(codes.Code(status.Code), "%s", status.Message)
+	}
+	after(res)
+	return errors, nil
+}
+
 // Timestamp is in units of microseconds since 1 January 1970.
 type Timestamp int64
 
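For illustration (not part of this change), the per-row error handling in `ApplyBulk` above — keep the error slice nil when every status is OK, allocate it lazily on the first failure — can be sketched in isolation; the codes and messages below are illustrative stand-ins for the RPC statuses:

```go
package main

import (
	"errors"
	"fmt"
)

// statusesToErrors mirrors the pattern in ApplyBulk: the per-row error
// slice stays nil when every status code is OK (0), and is allocated
// lazily on the first non-OK status.
func statusesToErrors(codes []int32, msgs []string) []error {
	var errs []error // kept as nil if everything is OK
	for i, c := range codes {
		if c == 0 {
			continue
		}
		if errs == nil {
			errs = make([]error, len(codes))
		}
		errs[i] = errors.New(msgs[i])
	}
	return errs
}

func main() {
	fmt.Println(statusesToErrors([]int32{0, 0}, []string{"", ""}) == nil) // true
	errs := statusesToErrors([]int32{0, 13}, []string{"", "boom"})
	fmt.Println(errs[0] == nil, errs[1].Error()) // true boom
}
```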
diff --git a/go/src/google.golang.org/cloud/bigtable/bigtable_test.go b/go/src/google.golang.org/cloud/bigtable/bigtable_test.go
index 8bce893..47ea699 100644
--- a/go/src/google.golang.org/cloud/bigtable/bigtable_test.go
+++ b/go/src/google.golang.org/cloud/bigtable/bigtable_test.go
@@ -180,7 +180,7 @@
 	var clientOpts []cloud.ClientOption
 	timeout := 10 * time.Second
 	if *useProd == "" {
-		srv, err := bttest.NewServer()
+		srv, err := bttest.NewServer("127.0.0.1:0")
 		if err != nil {
 			t.Fatal(err)
 		}
@@ -582,6 +582,70 @@
 		t.Errorf("Large scan returned %d bytes, want %d", n, want)
 	}
 	checkpoint("tested big read/write/scan")
+
+	// Test bulk mutations
+	if err := adminClient.CreateColumnFamily(ctx, table, "bulk"); err != nil {
+		t.Fatalf("Creating column family: %v", err)
+	}
+	bulkData := map[string][]string{
+		"red sox":  []string{"2004", "2007", "2013"},
+		"patriots": []string{"2001", "2003", "2004", "2014"},
+		"celtics":  []string{"1981", "1984", "1986", "2008"},
+	}
+	var rowKeys []string
+	var muts []*Mutation
+	for row, ss := range bulkData {
+		mut := NewMutation()
+		for _, name := range ss {
+			mut.Set("bulk", name, 0, []byte("1"))
+		}
+		rowKeys = append(rowKeys, row)
+		muts = append(muts, mut)
+	}
+	status, err := tbl.ApplyBulk(ctx, rowKeys, muts)
+	if err != nil {
+		t.Fatalf("Bulk mutating rows %q: %v", rowKeys, err)
+	}
+	if status != nil {
+		t.Errorf("non-nil errors: %v", status)
+	}
+	checkpoint("inserted bulk data")
+
+	// Read each row back
+	for rowKey, ss := range bulkData {
+		row, err := tbl.ReadRow(ctx, rowKey)
+		if err != nil {
+			t.Fatalf("Reading a bulk row: %v", err)
+		}
+		for _, ris := range row {
+			sort.Sort(byColumn(ris))
+		}
+		var wantItems []ReadItem
+		for _, val := range ss {
+			wantItems = append(wantItems, ReadItem{Row: rowKey, Column: "bulk:" + val, Value: []byte("1")})
+		}
+		wantRow := Row{"bulk": wantItems}
+		if !reflect.DeepEqual(row, wantRow) {
+			t.Errorf("Read row mismatch.\n got %#v\nwant %#v", row, wantRow)
+		}
+	}
+	checkpoint("tested reading from bulk insert")
+
+	// Test bulk write errors
+	badMut := NewMutation()
+	badMut.Set("badfamily", "col", -1, nil)
+	badMut2 := NewMutation()
+	badMut2.Set("badfamily2", "goodcol", -1, []byte("1"))
+	status, err = tbl.ApplyBulk(ctx, []string{"badrow", "badrow2"}, []*Mutation{badMut, badMut2})
+	if err != nil {
+		t.Fatalf("Bulk mutating rows %q: %v", rowKeys, err)
+	}
+	if status == nil {
+		t.Errorf("No errors for bad bulk mutation")
+	}
+	if status[0] == nil || status[1] == nil {
+		t.Errorf("No error for bad bulk mutation")
+	}
 }
 
 func fill(b, sub []byte) {
diff --git a/go/src/google.golang.org/cloud/bigtable/bttest/inmem.go b/go/src/google.golang.org/cloud/bigtable/bttest/inmem.go
index 15ac437..a81a200 100644
--- a/go/src/google.golang.org/cloud/bigtable/bttest/inmem.go
+++ b/go/src/google.golang.org/cloud/bigtable/bttest/inmem.go
@@ -19,7 +19,7 @@
 
 To use a Server, create it, and then connect to it with no security:
 (The project/zone/cluster values are ignored.)
-	srv, err := bttest.NewServer()
+	srv, err := bttest.NewServer("127.0.0.1:0")
 	...
 	conn, err := grpc.Dial(srv.Addr, grpc.WithInsecure())
 	...
@@ -41,13 +41,15 @@
 	"sync"
 	"time"
 
+	emptypb "github.com/golang/protobuf/ptypes/empty"
 	"golang.org/x/net/context"
 	btdpb "google.golang.org/cloud/bigtable/internal/data_proto"
-	emptypb "google.golang.org/cloud/bigtable/internal/empty"
+	rpcpb "google.golang.org/cloud/bigtable/internal/rpc_status_proto"
 	btspb "google.golang.org/cloud/bigtable/internal/service_proto"
 	bttdpb "google.golang.org/cloud/bigtable/internal/table_data_proto"
 	bttspb "google.golang.org/cloud/bigtable/internal/table_service_proto"
 	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
 )
 
 // Server is an in-memory Cloud Bigtable fake.
@@ -73,10 +75,11 @@
 	btspb.BigtableServiceServer
 }
 
-// NewServer creates a new Server. The Server will be listening for gRPC connections
-// at the address named by the Addr field, without TLS.
-func NewServer() (*Server, error) {
-	l, err := net.Listen("tcp", "127.0.0.1:0")
+// NewServer creates a new Server.
+// The Server will be listening for gRPC connections, without TLS,
+// on the provided address. The resolved address is named by the Addr field.
+func NewServer(laddr string) (*Server, error) {
+	l, err := net.Listen("tcp", laddr)
 	if err != nil {
 		return nil, err
 	}
@@ -150,7 +153,7 @@
 
 	return &bttdpb.Table{
 		Name:           tbl,
-		ColumnFamilies: toColumnFamilies(tblIns.families),
+		ColumnFamilies: toColumnFamilies(tblIns.columnFamilies()),
 	}, nil
 }
 
@@ -408,16 +411,44 @@
 		return nil, fmt.Errorf("no such table %q", req.TableName)
 	}
 
+	f := tbl.columnFamiliesSet()
 	r := tbl.mutableRow(string(req.RowKey))
 	r.mu.Lock()
 	defer r.mu.Unlock()
 
-	if err := applyMutations(tbl, r, req.Mutations); err != nil {
+	if err := applyMutations(tbl, r, req.Mutations, f); err != nil {
 		return nil, err
 	}
 	return &emptypb.Empty{}, nil
 }
 
+func (s *server) MutateRows(ctx context.Context, req *btspb.MutateRowsRequest) (*btspb.MutateRowsResponse, error) {
+	s.mu.Lock()
+	tbl, ok := s.tables[req.TableName]
+	s.mu.Unlock()
+	if !ok {
+		return nil, fmt.Errorf("no such table %q", req.TableName)
+	}
+
+	res := &btspb.MutateRowsResponse{Statuses: make([]*rpcpb.Status, len(req.Entries))}
+
+	f := tbl.columnFamiliesSet()
+
+	for i, entry := range req.Entries {
+		r := tbl.mutableRow(string(entry.RowKey))
+		r.mu.Lock()
+		if err := applyMutations(tbl, r, entry.Mutations, f); err != nil {
+			// We can't easily reconstruct the proper code after an error.
+			res.Statuses[i] = &rpcpb.Status{Code: int32(codes.Internal), Message: err.Error()}
+		} else {
+			res.Statuses[i] = &rpcpb.Status{Code: int32(codes.OK)}
+		}
+		r.mu.Unlock()
+	}
+
+	return res, nil
+}
+
 func (s *server) CheckAndMutateRow(ctx context.Context, req *btspb.CheckAndMutateRowRequest) (*btspb.CheckAndMutateRowResponse, error) {
 	s.mu.Lock()
 	tbl, ok := s.tables[req.TableName]
@@ -428,6 +459,8 @@
 
 	res := &btspb.CheckAndMutateRowResponse{}
 
+	f := tbl.columnFamiliesSet()
+
 	r := tbl.mutableRow(string(req.RowKey))
 	r.mu.Lock()
 	defer r.mu.Unlock()
@@ -457,52 +490,36 @@
 		muts = req.TrueMutations
 	}
 
-	if err := applyMutations(tbl, r, muts); err != nil {
+	if err := applyMutations(tbl, r, muts, f); err != nil {
 		return nil, err
 	}
 	return res, nil
 }
 
 // applyMutations applies a sequence of mutations to a row.
+// fam should be a snapshot of the keys of tbl.families.
 // It assumes r.mu is locked.
-func applyMutations(tbl *table, r *row, muts []*btdpb.Mutation) error {
+func applyMutations(tbl *table, r *row, muts []*btdpb.Mutation, fam map[string]bool) error {
 	for _, mut := range muts {
 		switch mut := mut.Mutation.(type) {
 		default:
 			return fmt.Errorf("can't handle mutation type %T", mut)
 		case *btdpb.Mutation_SetCell_:
 			set := mut.SetCell
-			tbl.mu.RLock()
-			_, famOK := tbl.families[set.FamilyName]
-			tbl.mu.RUnlock()
-			if !famOK {
+			if !fam[set.FamilyName] {
 				return fmt.Errorf("unknown family %q", set.FamilyName)
 			}
 			ts := set.TimestampMicros
 			if ts == -1 { // bigtable.ServerTime
-				ts = time.Now().UnixNano() / 1e3
-				ts -= ts % 1000 // round to millisecond granularity
+				ts = newTimestamp()
 			}
 			if !tbl.validTimestamp(ts) {
 				return fmt.Errorf("invalid timestamp %d", ts)
 			}
 			col := fmt.Sprintf("%s:%s", set.FamilyName, set.ColumnQualifier)
 
-			cs := r.cells[col]
 			newCell := cell{ts: ts, value: set.Value}
-			replaced := false
-			for i, cell := range cs {
-				if cell.ts == newCell.ts {
-					cs[i] = newCell
-					replaced = true
-					break
-				}
-			}
-			if !replaced {
-				cs = append(cs, newCell)
-			}
-			sort.Sort(byDescTS(cs))
-			r.cells[col] = cs
+			r.cells[col] = appendOrReplaceCell(r.cells[col], newCell)
 		case *btdpb.Mutation_DeleteFromColumn_:
 			del := mut.DeleteFromColumn
 			col := fmt.Sprintf("%s:%s", del.FamilyName, del.ColumnQualifier)
@@ -545,6 +562,35 @@
 	return nil
 }
 
+func maxTimestamp(x, y int64) int64 {
+	if x > y {
+		return x
+	}
+	return y
+}
+
+func newTimestamp() int64 {
+	ts := time.Now().UnixNano() / 1e3
+	ts -= ts % 1000 // round to millisecond granularity
+	return ts
+}
+
+func appendOrReplaceCell(cs []cell, newCell cell) []cell {
+	replaced := false
+	for i, cell := range cs {
+		if cell.ts == newCell.ts {
+			cs[i] = newCell
+			replaced = true
+			break
+		}
+	}
+	if !replaced {
+		cs = append(cs, newCell)
+	}
+	sort.Sort(byDescTS(cs))
+	return cs
+}
+
 func (s *server) ReadModifyWriteRow(ctx context.Context, req *btspb.ReadModifyWriteRowRequest) (*btdpb.Row, error) {
 	s.mu.Lock()
 	tbl, ok := s.tables[req.TableName]
@@ -570,34 +616,40 @@
 
 		key := fmt.Sprintf("%s:%s", rule.FamilyName, rule.ColumnQualifier)
 
-		newCell := false
-		if len(r.cells[key]) == 0 {
-			r.cells[key] = []cell{{
-			// TODO(dsymonds): should this set a timestamp?
-			}}
-			newCell = true
+		cells := r.cells[key]
+		ts := newTimestamp()
+		var newCell, prevCell cell
+		isEmpty := len(cells) == 0
+		if !isEmpty {
+			prevCell = cells[0]
+
+			// ts is the max of now and the prev cell's timestamp,
+			// in case the prev cell's timestamp is in the future.
+			ts = maxTimestamp(ts, prevCell.ts)
 		}
-		cell := &r.cells[key][0]
 
 		switch rule := rule.Rule.(type) {
 		default:
 			return nil, fmt.Errorf("unknown RMW rule oneof %T", rule)
 		case *btdpb.ReadModifyWriteRule_AppendValue:
-			cell.value = append(cell.value, rule.AppendValue...)
+			newCell = cell{ts: ts, value: append(prevCell.value, rule.AppendValue...)}
 		case *btdpb.ReadModifyWriteRule_IncrementAmount:
 			var v int64
-			if !newCell {
-				if len(cell.value) != 8 {
+			if !isEmpty {
+				prevVal := prevCell.value
+				if len(prevVal) != 8 {
 					return nil, fmt.Errorf("increment on non-64-bit value")
 				}
-				v = int64(binary.BigEndian.Uint64(cell.value))
+				v = int64(binary.BigEndian.Uint64(prevVal))
 			}
+
 			v += rule.IncrementAmount
 			var val [8]byte
 			binary.BigEndian.PutUint64(val[:], uint64(v))
-			cell.value = val[:]
+			newCell = cell{ts: ts, value: val[:]}
 		}
-		updates[key] = *cell
+		updates[key] = newCell
+		r.cells[key] = appendOrReplaceCell(r.cells[key], newCell)
 	}
 
 	res := &btdpb.Row{
@@ -684,6 +736,24 @@
 	return ts%1000 == 0
 }
 
+func (t *table) columnFamilies() map[string]*columnFamily {
+	cp := make(map[string]*columnFamily)
+	t.mu.RLock()
+	for fam, cf := range t.families {
+		cp[fam] = cf
+	}
+	t.mu.RUnlock()
+	return cp
+}
+
+func (t *table) columnFamiliesSet() map[string]bool {
+	f := make(map[string]bool)
+	for fam := range t.columnFamilies() {
+		f[fam] = true
+	}
+	return f
+}
+
 func (t *table) mutableRow(row string) *row {
 	// Try fast path first.
 	t.mu.RLock()
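For illustration (not part of this change), the cell-replacement helper factored out above can be exercised in isolation; the local `cell` type and `sort.Slice` comparator below are stand-ins for the package's `cell` and `byDescTS`:

```go
package main

import (
	"fmt"
	"sort"
)

type cell struct {
	ts    int64
	value []byte
}

// appendOrReplace mirrors appendOrReplaceCell above: a cell with a
// duplicate timestamp replaces the existing one in place; otherwise the
// new cell is appended, and the slice is kept sorted newest-first.
func appendOrReplace(cs []cell, nc cell) []cell {
	replaced := false
	for i, c := range cs {
		if c.ts == nc.ts {
			cs[i] = nc
			replaced = true
			break
		}
	}
	if !replaced {
		cs = append(cs, nc)
	}
	sort.Slice(cs, func(i, j int) bool { return cs[i].ts > cs[j].ts })
	return cs
}

func main() {
	cs := appendOrReplace(nil, cell{ts: 1000, value: []byte("a")})
	cs = appendOrReplace(cs, cell{ts: 3000, value: []byte("b")})
	cs = appendOrReplace(cs, cell{ts: 1000, value: []byte("c")}) // replaces "a"
	fmt.Println(len(cs), string(cs[0].value), string(cs[1].value)) // 2 b c
}
```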
diff --git a/go/src/google.golang.org/cloud/bigtable/bttest/inmem_test.go b/go/src/google.golang.org/cloud/bigtable/bttest/inmem_test.go
new file mode 100644
index 0000000..aee1f6a
--- /dev/null
+++ b/go/src/google.golang.org/cloud/bigtable/bttest/inmem_test.go
@@ -0,0 +1,85 @@
+package bttest
+
+import (
+	"fmt"
+	"math/rand"
+	"sync"
+	"sync/atomic"
+	"testing"
+	"time"
+
+	"golang.org/x/net/context"
+	btdpb "google.golang.org/cloud/bigtable/internal/data_proto"
+	btspb "google.golang.org/cloud/bigtable/internal/service_proto"
+	bttdpb "google.golang.org/cloud/bigtable/internal/table_data_proto"
+	bttspb "google.golang.org/cloud/bigtable/internal/table_service_proto"
+)
+
+func TestConcurrentMutationsAndGC(t *testing.T) {
+	s := &server{
+		tables: make(map[string]*table),
+	}
+	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
+	defer cancel()
+	if _, err := s.CreateTable(
+		ctx,
+		&bttspb.CreateTableRequest{Name: "cluster", TableId: "t"}); err != nil {
+		t.Fatal(err)
+	}
+	const name = `cluster/tables/t`
+	tbl := s.tables[name]
+	req := &bttspb.CreateColumnFamilyRequest{Name: name, ColumnFamilyId: "cf"}
+	fam, err := s.CreateColumnFamily(ctx, req)
+	if err != nil {
+		t.Fatal(err)
+	}
+	fam.GcRule = &bttdpb.GcRule{Rule: &bttdpb.GcRule_MaxNumVersions{MaxNumVersions: 1}}
+	if _, err := s.UpdateColumnFamily(ctx, fam); err != nil {
+		t.Fatal(err)
+	}
+
+	var wg sync.WaitGroup
+	var ts int64
+	ms := func() []*btdpb.Mutation {
+		return []*btdpb.Mutation{
+			{
+				Mutation: &btdpb.Mutation_SetCell_{
+					SetCell: &btdpb.Mutation_SetCell{
+						FamilyName:      "cf",
+						ColumnQualifier: []byte(`col`),
+						TimestampMicros: atomic.AddInt64(&ts, 1000),
+					},
+				},
+			},
+		}
+	}
+	for i := 0; i < 100; i++ {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			for ctx.Err() == nil {
+				req := &btspb.MutateRowRequest{
+					TableName: name,
+					RowKey:    []byte(fmt.Sprint(rand.Intn(100))),
+					Mutations: ms(),
+				}
+				s.MutateRow(ctx, req)
+			}
+		}()
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			tbl.gc()
+		}()
+	}
+	done := make(chan struct{})
+	go func() {
+		wg.Wait()
+		close(done)
+	}()
+	select {
+	case <-done:
+	case <-time.After(100 * time.Millisecond):
+		t.Error("Concurrent mutations and GCs haven't completed after 100ms")
+	}
+}
diff --git a/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbt.go b/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbt.go
index b009fdf..7f49244 100644
--- a/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbt.go
+++ b/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbt.go
@@ -198,7 +198,7 @@
 	},
 	{
 		Name:  "doc",
-		Desc:  "Print documentation for cbt",
+		Desc:  "Print godoc-suitable documentation for cbt",
 		do:    doDoc,
 		Usage: "cbt doc",
 	},
@@ -228,6 +228,12 @@
 			"cbt ls <table>		List column families in <table>",
 	},
 	{
+		Name:  "mddoc",
+		Desc:  "Print documentation for cbt in Markdown format",
+		do:    doMDDoc,
+		Usage: "cbt mddoc",
+	},
+	{
 		Name: "read",
 		Desc: "Read rows",
 		do:   doRead,
@@ -338,17 +344,20 @@
 
 // to break circular dependencies
 var (
-	doDocFn  func(ctx context.Context, args ...string)
-	doHelpFn func(ctx context.Context, args ...string)
+	doDocFn   func(ctx context.Context, args ...string)
+	doHelpFn  func(ctx context.Context, args ...string)
+	doMDDocFn func(ctx context.Context, args ...string)
 )
 
 func init() {
 	doDocFn = doDocReal
 	doHelpFn = doHelpReal
+	doMDDocFn = doMDDocReal
 }
 
-func doDoc(ctx context.Context, args ...string)  { doDocFn(ctx, args...) }
-func doHelp(ctx context.Context, args ...string) { doHelpFn(ctx, args...) }
+func doDoc(ctx context.Context, args ...string)   { doDocFn(ctx, args...) }
+func doHelp(ctx context.Context, args ...string)  { doHelpFn(ctx, args...) }
+func doMDDoc(ctx context.Context, args ...string) { doMDDocFn(ctx, args...) }
 
 func doDocReal(ctx context.Context, args ...string) {
 	data := map[string]interface{}{
@@ -365,14 +374,16 @@
 	os.Stdout.Write(out)
 }
 
+func indentLines(s, ind string) string {
+	ss := strings.Split(s, "\n")
+	for i, p := range ss {
+		ss[i] = ind + p
+	}
+	return strings.Join(ss, "\n")
+}
+
 var docTemplate = template.Must(template.New("doc").Funcs(template.FuncMap{
-	"indent": func(s, ind string) string {
-		ss := strings.Split(s, "\n")
-		for i, p := range ss {
-			ss[i] = ind + p
-		}
-		return strings.Join(ss, "\n")
-	},
+	"indent": indentLines,
 }).
 	Parse(`
 // DO NOT EDIT. THIS IS AUTOMATICALLY GENERATED.
@@ -501,6 +512,43 @@
 	}
 }
 
+func doMDDocReal(ctx context.Context, args ...string) {
+	data := map[string]interface{}{
+		"Commands": commands,
+	}
+	var buf bytes.Buffer
+	if err := mddocTemplate.Execute(&buf, data); err != nil {
+		log.Fatalf("Bad mddoc template: %v", err)
+	}
+	io.Copy(os.Stdout, &buf)
+}
+
+var mddocTemplate = template.Must(template.New("mddoc").Funcs(template.FuncMap{
+	"indent": indentLines,
+}).
+	Parse(`
+Cbt is a tool for doing basic interactions with Cloud Bigtable.
+
+Usage:
+
+	cbt [options] command [arguments]
+
+The commands are:
+{{range .Commands}}
+	{{printf "%-25s %s" .Name .Desc}}{{end}}
+
+Use "cbt help <command>" for more information about a command.
+
+{{range .Commands}}
+## {{.Desc}}
+
+{{indent .Usage "\t"}}
+
+
+
+{{end}}
+`))
+
 func doRead(ctx context.Context, args ...string) {
 	if len(args) < 1 {
 		log.Fatalf("usage: cbt read <table> [args ...]")
diff --git a/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbtdoc.go b/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbtdoc.go
index d170e99..b8e778f 100644
--- a/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbtdoc.go
+++ b/go/src/google.golang.org/cloud/bigtable/cmd/cbt/cbtdoc.go
@@ -17,11 +17,12 @@
 	deletefamily              Delete a column family
 	deleterow                 Delete a row
 	deletetable               Delete a table
-	doc                       Print documentation for cbt
+	doc                       Print godoc-suitable documentation for cbt
 	help                      Print help text
 	listclusters              List clusters in a project
 	lookup                    Read from a single row
 	ls                        List tables and column families
+	mddoc                     Print documentation for cbt in Markdown format
 	read                      Read rows
 	set                       Set value of a cell
 	setgcpolicy               Set the GC policy for a column family
@@ -77,7 +78,7 @@
 
 
 
-Print documentation for cbt
+Print godoc-suitable documentation for cbt
 
 Usage:
 	cbt doc
@@ -118,6 +119,14 @@
 
 
 
+Print documentation for cbt in Markdown format
+
+Usage:
+	cbt mddoc
+
+
+
+
 Read rows
 
 Usage:
diff --git a/go/src/google.golang.org/cloud/bigtable/gc.go b/go/src/google.golang.org/cloud/bigtable/gc.go
index 84499fc..6814a83 100644
--- a/go/src/google.golang.org/cloud/bigtable/gc.go
+++ b/go/src/google.golang.org/cloud/bigtable/gc.go
@@ -21,7 +21,7 @@
 	"strings"
 	"time"
 
-	durpb "google.golang.org/cloud/bigtable/internal/duration_proto"
+	durpb "github.com/golang/protobuf/ptypes/duration"
 	bttdpb "google.golang.org/cloud/bigtable/internal/table_data_proto"
 )
 
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.pb.go
index d587e7e..f03a8aa 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.pb.go
@@ -23,6 +23,10 @@
 var _ = fmt.Errorf
 var _ = math.Inf
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
 type StorageType int32
 
 const (
@@ -30,15 +34,20 @@
 	StorageType_STORAGE_UNSPECIFIED StorageType = 0
 	// Data will be stored in SSD, providing low and consistent latencies.
 	StorageType_STORAGE_SSD StorageType = 1
+	// Data will be stored in HDD, providing high and less predictable
+	// latencies.
+	StorageType_STORAGE_HDD StorageType = 2
 )
 
 var StorageType_name = map[int32]string{
 	0: "STORAGE_UNSPECIFIED",
 	1: "STORAGE_SSD",
+	2: "STORAGE_HDD",
 }
 var StorageType_value = map[string]int32{
 	"STORAGE_UNSPECIFIED": 0,
 	"STORAGE_SSD":         1,
+	"STORAGE_HDD":         2,
 }
 
 func (x StorageType) String() string {
@@ -85,7 +94,7 @@
 	// Values are of the form projects/<project>/zones/[a-z][-a-z0-9]*
 	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
 	// The name of this zone as it appears in UIs.
-	DisplayName string `protobuf:"bytes,2,opt,name=display_name" json:"display_name,omitempty"`
+	DisplayName string `protobuf:"bytes,2,opt,name=display_name,json=displayName" json:"display_name,omitempty"`
 	// The current state of this zone.
 	Status Zone_Status `protobuf:"varint,3,opt,name=status,enum=google.bigtable.admin.cluster.v1.Zone_Status" json:"status,omitempty"`
 }
@@ -104,12 +113,12 @@
 	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
 	// The descriptive name for this cluster as it appears in UIs.
 	// Must be unique per zone.
-	DisplayName string `protobuf:"bytes,4,opt,name=display_name" json:"display_name,omitempty"`
+	DisplayName string `protobuf:"bytes,4,opt,name=display_name,json=displayName" json:"display_name,omitempty"`
 	// The number of serve nodes allocated to this cluster.
-	ServeNodes int32 `protobuf:"varint,5,opt,name=serve_nodes" json:"serve_nodes,omitempty"`
+	ServeNodes int32 `protobuf:"varint,5,opt,name=serve_nodes,json=serveNodes" json:"serve_nodes,omitempty"`
 	// What storage type to use for tables in this cluster. Only configurable at
 	// cluster creation time. If unspecified, STORAGE_SSD will be used.
-	DefaultStorageType StorageType `protobuf:"varint,8,opt,name=default_storage_type,enum=google.bigtable.admin.cluster.v1.StorageType" json:"default_storage_type,omitempty"`
+	DefaultStorageType StorageType `protobuf:"varint,8,opt,name=default_storage_type,json=defaultStorageType,enum=google.bigtable.admin.cluster.v1.StorageType" json:"default_storage_type,omitempty"`
 }
 
 func (m *Cluster) Reset()                    { *m = Cluster{} }
@@ -125,28 +134,30 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 364 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x91, 0x4d, 0x4f, 0xc2, 0x40,
-	0x10, 0x86, 0x2d, 0x1f, 0x45, 0x17, 0xa2, 0xcd, 0x42, 0x62, 0x8f, 0x84, 0x78, 0x30, 0x26, 0x2e,
-	0x51, 0x0f, 0x9e, 0x3c, 0xf4, 0x63, 0x25, 0x04, 0x59, 0x1a, 0x0a, 0xf1, 0xe3, 0xb2, 0x59, 0xe8,
-	0xda, 0x34, 0x29, 0x5d, 0xd2, 0x6e, 0x49, 0xf8, 0x15, 0xfe, 0x20, 0xff, 0x9c, 0x4b, 0xa9, 0x86,
-	0x1b, 0xde, 0x66, 0xde, 0xe7, 0x9d, 0x99, 0xbc, 0x19, 0xf0, 0x16, 0x0a, 0x11, 0xc6, 0x1c, 0x85,
-	0x22, 0x66, 0x49, 0x88, 0x44, 0x1a, 0xf6, 0x97, 0xb1, 0xc8, 0x83, 0xfe, 0x22, 0x0a, 0x25, 0x5b,
-	0xc4, 0xbc, 0x1f, 0x25, 0x92, 0xa7, 0x09, 0x8b, 0x95, 0x9e, 0x67, 0xaa, 0xa4, 0x01, 0x93, 0x8c,
-	0xae, 0x53, 0x21, 0xc5, 0x9f, 0x89, 0x1e, 0x32, 0x54, 0x30, 0xd8, 0x2d, 0x37, 0xff, 0x7a, 0x10,
-	0x0b, 0x56, 0x51, 0x82, 0x4a, 0x27, 0xda, 0xdc, 0xf5, 0xbe, 0x35, 0x50, 0xfb, 0x10, 0x09, 0x87,
-	0x2d, 0x50, 0x4b, 0xd8, 0x8a, 0x9b, 0x5a, 0x57, 0xbb, 0x3e, 0x83, 0x1d, 0xd0, 0x0a, 0xa2, 0x6c,
-	0x1d, 0xb3, 0x2d, 0x2d, 0xd4, 0x4a, 0xa1, 0x3e, 0x01, 0x3d, 0x93, 0x4c, 0xe6, 0x99, 0x59, 0x55,
-	0xfd, 0xf9, 0xfd, 0x2d, 0x3a, 0xb6, 0x1f, 0xed, 0x76, 0x23, 0xbf, 0x18, 0xea, 0x79, 0x40, 0xdf,
-	0x57, 0xb0, 0x09, 0x1a, 0x73, 0x32, 0x22, 0x93, 0x57, 0x62, 0x9c, 0x40, 0x1d, 0x54, 0x26, 0x23,
-	0x43, 0x83, 0x97, 0xa0, 0xed, 0xbd, 0x58, 0x84, 0x60, 0x97, 0x8e, 0xad, 0x21, 0x99, 0x61, 0x62,
-	0x11, 0x07, 0x1b, 0x15, 0x68, 0x82, 0x0e, 0x1e, 0xe3, 0xe9, 0x00, 0x13, 0xe7, 0xbd, 0x40, 0x25,
-	0xa9, 0xf6, 0xbe, 0x34, 0xd0, 0x70, 0xf6, 0xc7, 0x8e, 0x04, 0xa8, 0x15, 0x6a, 0x1b, 0x34, 0x33,
-	0x9e, 0x6e, 0x38, 0x4d, 0x44, 0xc0, 0x33, 0xb3, 0xae, 0xc4, 0x3a, 0x1c, 0x81, 0x4e, 0xc0, 0x3f,
-	0x59, 0x1e, 0x4b, 0x9a, 0x49, 0x91, 0xb2, 0x90, 0x53, 0xb9, 0x5d, 0x73, 0xf3, 0xf4, 0xbf, 0x19,
-	0xfd, 0xfd, 0xd4, 0x4c, 0x0d, 0xdd, 0x3c, 0x82, 0xe6, 0x41, 0xbb, 0xcb, 0xe4, 0xcf, 0x26, 0x53,
-	0x6b, 0x80, 0xe9, 0x9c, 0xf8, 0x1e, 0x76, 0x86, 0xcf, 0x43, 0xec, 0xaa, 0xd0, 0x17, 0xca, 0x57,
-	0x02, 0xdf, 0x77, 0x0d, 0xcd, 0xb6, 0xc1, 0xd5, 0x52, 0xac, 0x8e, 0x1e, 0xb3, 0x4d, 0xbb, 0x44,
-	0x65, 0x6e, 0x57, 0x7d, 0xdb, 0xdb, 0x3d, 0xdb, 0xd3, 0x16, 0x7a, 0xf1, 0xf5, 0x87, 0x9f, 0x00,
-	0x00, 0x00, 0xff, 0xff, 0x6a, 0x10, 0x35, 0x5b, 0x51, 0x02, 0x00, 0x00,
+	// 398 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x92, 0xdf, 0x6a, 0xdb, 0x30,
+	0x14, 0xc6, 0xab, 0x34, 0x4d, 0xb7, 0xe3, 0xb1, 0x19, 0xad, 0x30, 0xdf, 0x2d, 0x0b, 0xbb, 0x08,
+	0x83, 0x29, 0x6c, 0x7b, 0x82, 0xd8, 0xd6, 0xda, 0xd0, 0x55, 0x31, 0x76, 0xca, 0xfe, 0xdc, 0x08,
+	0x25, 0xd6, 0x8c, 0x41, 0xb1, 0x8c, 0x2d, 0x17, 0xf2, 0x7a, 0x7b, 0x88, 0x3d, 0xcf, 0xb0, 0xa2,
+	0x0e, 0x43, 0x2f, 0xda, 0x3b, 0xfb, 0xfb, 0x7d, 0xe7, 0x1c, 0x9d, 0x8f, 0x03, 0x3f, 0x0a, 0xad,
+	0x0b, 0x25, 0x49, 0xa1, 0x95, 0xa8, 0x0a, 0xa2, 0x9b, 0x62, 0xb1, 0x53, 0xba, 0xcb, 0x17, 0xdb,
+	0xb2, 0x30, 0x62, 0xab, 0xe4, 0xa2, 0xac, 0x8c, 0x6c, 0x2a, 0xa1, 0x16, 0x3b, 0xd5, 0xb5, 0x46,
+	0x36, 0x3c, 0x17, 0x46, 0xf0, 0xba, 0xd1, 0x46, 0xff, 0x37, 0xf1, 0x21, 0x23, 0x96, 0xe1, 0xa9,
+	0xeb, 0x7c, 0xef, 0x21, 0x22, 0xdf, 0x97, 0x15, 0x71, 0x4e, 0x72, 0xf7, 0x69, 0xf6, 0x17, 0xc1,
+	0xf8, 0x97, 0xae, 0x24, 0xc6, 0x30, 0xae, 0xc4, 0x5e, 0x06, 0x68, 0x8a, 0xe6, 0xcf, 0x53, 0xfb,
+	0x8d, 0xdf, 0xc1, 0x8b, 0xbc, 0x6c, 0x6b, 0x25, 0x0e, 0xdc, 0xb2, 0x91, 0x65, 0x9e, 0xd3, 0x58,
+	0x6f, 0xa1, 0x30, 0x69, 0x8d, 0x30, 0x5d, 0x1b, 0x9c, 0x4e, 0xd1, 0xfc, 0xe5, 0xe7, 0x8f, 0xe4,
+	0xb1, 0x91, 0xa4, 0x1f, 0x47, 0x32, 0x5b, 0x94, 0xba, 0xe2, 0x59, 0x02, 0x93, 0xa3, 0x82, 0x3d,
+	0x38, 0xbf, 0x65, 0xd7, 0x6c, 0xfd, 0x9d, 0xf9, 0x27, 0x78, 0x02, 0xa3, 0xf5, 0xb5, 0x8f, 0xf0,
+	0x1b, 0x78, 0x9d, 0x7c, 0x5b, 0x32, 0x46, 0x63, 0x7e, 0xb3, 0x5c, 0xb1, 0x0d, 0x65, 0x4b, 0x16,
+	0x51, 0x7f, 0x84, 0x03, 0xb8, 0xa0, 0x37, 0x34, 0xbd, 0xa4, 0x2c, 0xfa, 0x69, 0x91, 0x23, 0xa7,
+	0xb3, 0x3f, 0x08, 0xce, 0xa3, 0xe3, 0xd0, 0x27, 0xed, 0x36, 0x7e, 0xb8, 0xdb, 0x5b, 0xf0, 0x5a,
+	0xd9, 0xdc, 0x49, 0x5e, 0xe9, 0x5c, 0xb6, 0xc1, 0xd9, 0x14, 0xcd, 0xcf, 0x52, 0xb0, 0x12, 0xeb,
+	0x15, 0xcc, 0xe1, 0x22, 0x97, 0xbf, 0x45, 0xa7, 0x0c, 0x6f, 0x8d, 0x6e, 0x44, 0x21, 0xb9, 0x39,
+	0xd4, 0x32, 0x78, 0xf6, 0xd4, 0x28, 0xb2, 0x63, 0xd5, 0xe6, 0x50, 0xcb, 0x14, 0xbb, 0x56, 0x03,
+	0xed, 0xc3, 0x15, 0x78, 0x83, 0xdf, 0x3e, 0x86, 0x6c, 0xb3, 0x4e, 0x97, 0x97, 0x94, 0xdf, 0xb2,
+	0x2c, 0xa1, 0xd1, 0xea, 0xeb, 0x8a, 0xc6, 0xfe, 0x09, 0x7e, 0x05, 0xde, 0x3d, 0xc8, 0xb2, 0xd8,
+	0x47, 0x43, 0xe1, 0x2a, 0x8e, 0xfd, 0x51, 0x18, 0xc2, 0xfb, 0x9d, 0xde, 0x3f, 0xfa, 0xa2, 0x30,
+	0x08, 0x1d, 0x72, 0xd9, 0xc5, 0xc2, 0x88, 0xa4, 0xbf, 0xa5, 0x04, 0x6d, 0x27, 0xf6, 0xa8, 0xbe,
+	0xfc, 0x0b, 0x00, 0x00, 0xff, 0xff, 0x4e, 0x74, 0xe9, 0xe5, 0xb0, 0x02, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.proto b/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.proto
index af39559..5603160 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.proto
@@ -59,10 +59,6 @@
   // projects/<project>/zones/<zone>/clusters/[a-z][-a-z0-9]*
   string name = 1;
 
-  // If this cluster has been deleted, the time at which its backup will
-  // be irrevocably destroyed. Omitted otherwise.
-  // This cannot be set directly, only through DeleteCluster.
-
   // The operation currently running on the cluster, if any.
   // This cannot be set directly, only through CreateCluster, UpdateCluster,
   // or UndeleteCluster. Calls to these methods will be rejected if
@@ -86,4 +82,8 @@
 
   // Data will be stored in SSD, providing low and consistent latencies.
   STORAGE_SSD = 1;
+
+  // Data will be stored in HDD, providing high and less predictable
+  // latencies.
+  STORAGE_HDD = 2;
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.pb.go
index c28d6b4..d86a159 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.pb.go
@@ -8,7 +8,7 @@
 import fmt "fmt"
 import math "math"
 import google_bigtable_admin_cluster_v11 "google.golang.org/cloud/bigtable/internal/cluster_data_proto"
-import google_protobuf "google.golang.org/cloud/bigtable/internal/empty"
+import google_protobuf1 "github.com/golang/protobuf/ptypes/empty"
 
 import (
 	context "golang.org/x/net/context"
@@ -24,6 +24,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for BigtableClusterService service
 
 type BigtableClusterServiceClient interface {
@@ -89,7 +93,7 @@
 	// At the cluster's "delete_time":
 	//  * The cluster and *all of its tables* will immediately and irrevocably
 	//    disappear from the API, and their data will be permanently deleted.
-	DeleteCluster(ctx context.Context, in *DeleteClusterRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error)
+	DeleteCluster(ctx context.Context, in *DeleteClusterRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error)
 }
 
 type bigtableClusterServiceClient struct {
@@ -145,8 +149,8 @@
 	return out, nil
 }
 
-func (c *bigtableClusterServiceClient) DeleteCluster(ctx context.Context, in *DeleteClusterRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error) {
-	out := new(google_protobuf.Empty)
+func (c *bigtableClusterServiceClient) DeleteCluster(ctx context.Context, in *DeleteClusterRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error) {
+	out := new(google_protobuf1.Empty)
 	err := grpc.Invoke(ctx, "/google.bigtable.admin.cluster.v1.BigtableClusterService/DeleteCluster", in, out, c.cc, opts...)
 	if err != nil {
 		return nil, err
@@ -219,83 +223,119 @@
 	// At the cluster's "delete_time":
 	//  * The cluster and *all of its tables* will immediately and irrevocably
 	//    disappear from the API, and their data will be permanently deleted.
-	DeleteCluster(context.Context, *DeleteClusterRequest) (*google_protobuf.Empty, error)
+	DeleteCluster(context.Context, *DeleteClusterRequest) (*google_protobuf1.Empty, error)
 }
 
 func RegisterBigtableClusterServiceServer(s *grpc.Server, srv BigtableClusterServiceServer) {
 	s.RegisterService(&_BigtableClusterService_serviceDesc, srv)
 }
 
-func _BigtableClusterService_ListZones_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableClusterService_ListZones_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(ListZonesRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableClusterServiceServer).ListZones(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableClusterServiceServer).ListZones(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.cluster.v1.BigtableClusterService/ListZones",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableClusterServiceServer).ListZones(ctx, req.(*ListZonesRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableClusterService_GetCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableClusterService_GetCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(GetClusterRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableClusterServiceServer).GetCluster(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableClusterServiceServer).GetCluster(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.cluster.v1.BigtableClusterService/GetCluster",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableClusterServiceServer).GetCluster(ctx, req.(*GetClusterRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableClusterService_ListClusters_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableClusterService_ListClusters_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(ListClustersRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableClusterServiceServer).ListClusters(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableClusterServiceServer).ListClusters(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.cluster.v1.BigtableClusterService/ListClusters",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableClusterServiceServer).ListClusters(ctx, req.(*ListClustersRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableClusterService_CreateCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableClusterService_CreateCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(CreateClusterRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableClusterServiceServer).CreateCluster(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableClusterServiceServer).CreateCluster(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.cluster.v1.BigtableClusterService/CreateCluster",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableClusterServiceServer).CreateCluster(ctx, req.(*CreateClusterRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableClusterService_UpdateCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableClusterService_UpdateCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(google_bigtable_admin_cluster_v11.Cluster)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableClusterServiceServer).UpdateCluster(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableClusterServiceServer).UpdateCluster(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.cluster.v1.BigtableClusterService/UpdateCluster",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableClusterServiceServer).UpdateCluster(ctx, req.(*google_bigtable_admin_cluster_v11.Cluster))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableClusterService_DeleteCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableClusterService_DeleteCluster_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(DeleteClusterRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableClusterServiceServer).DeleteCluster(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableClusterServiceServer).DeleteCluster(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.cluster.v1.BigtableClusterService/DeleteCluster",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableClusterServiceServer).DeleteCluster(ctx, req.(*DeleteClusterRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 var _BigtableClusterService_serviceDesc = grpc.ServiceDesc{
@@ -331,26 +371,26 @@
 }
 
 var fileDescriptor1 = []byte{
-	// 334 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xb4, 0x92, 0x3d, 0x4b, 0xf4, 0x40,
-	0x14, 0x85, 0xf7, 0x2d, 0x5e, 0xc1, 0xc1, 0x6d, 0xa6, 0xd8, 0x62, 0xb1, 0x90, 0xc5, 0xc6, 0x66,
-	0x82, 0xbb, 0x68, 0x63, 0x97, 0xf5, 0xa3, 0xb1, 0x58, 0x14, 0x41, 0x2c, 0x0c, 0x93, 0xe4, 0x3a,
-	0x04, 0x26, 0x33, 0x31, 0x77, 0x12, 0xb0, 0xf2, 0xc7, 0xf9, 0xc7, 0xcc, 0xc7, 0x4c, 0x74, 0x45,
-	0x49, 0x22, 0xd8, 0x84, 0x30, 0x73, 0xce, 0x79, 0xee, 0x49, 0x2e, 0x79, 0x14, 0x5a, 0x0b, 0x09,
-	0x4c, 0x68, 0xc9, 0x95, 0x60, 0x3a, 0x17, 0x5e, 0x24, 0x75, 0x11, 0x7b, 0x61, 0x22, 0x0c, 0x0f,
-	0x25, 0x78, 0x89, 0x32, 0x90, 0x2b, 0x2e, 0xab, 0xf3, 0x02, 0xab, 0xd7, 0x00, 0x21, 0x2f, 0x93,
-	0x08, 0x82, 0x2c, 0xd7, 0x46, 0x77, 0xba, 0xe0, 0xcb, 0x35, 0x6b, 0xae, 0xe9, 0x81, 0xcd, 0x77,
-	0x32, 0xc6, 0xe3, 0x34, 0x51, 0xcc, 0x8a, 0x59, 0x79, 0x3c, 0xbf, 0x1f, 0x3f, 0x41, 0xcc, 0x0d,
-	0xff, 0x09, 0x5f, 0xdf, 0xb5, 0xec, 0xb9, 0xf8, 0xab, 0x6e, 0x41, 0x0a, 0x88, 0x5c, 0x00, 0x5a,
-	0xd0, 0xd9, 0x70, 0x10, 0xa4, 0x99, 0x79, 0x69, 0x9f, 0xad, 0x79, 0xf9, 0xf6, 0x9f, 0xcc, 0x7c,
-	0xab, 0x5b, 0xb7, 0x9c, 0xdb, 0x16, 0x43, 0x4b, 0xb2, 0x7b, 0x9d, 0xa0, 0x79, 0xd0, 0x0a, 0x90,
-	0x2e, 0x59, 0xdf, 0xa7, 0x64, 0x9d, 0xf8, 0x06, 0x9e, 0x0b, 0x40, 0x33, 0x5f, 0x8d, 0xf2, 0x60,
-	0xa6, 0x15, 0xc2, 0x62, 0x42, 0x15, 0x21, 0x57, 0x60, 0xec, 0x30, 0x74, 0x40, 0xc8, 0x87, 0xda,
-	0x91, 0x8f, 0xfa, 0x4d, 0xd6, 0x51, 0xf1, 0x5e, 0xc9, 0x5e, 0x3d, 0x86, 0x3d, 0x40, 0x7a, 0x32,
-	0x6c, 0x6c, 0xa7, 0x77, 0xcc, 0xd3, 0xb1, 0xb6, 0xae, 0xb0, 0x21, 0xd3, 0x75, 0x0e, 0xdc, 0xb8,
-	0x1f, 0x40, 0x07, 0x44, 0x6d, 0x19, 0x7e, 0x55, 0x5b, 0x90, 0xe9, 0x5d, 0x16, 0x7f, 0xa2, 0x0e,
-	0x77, 0x8f, 0x03, 0x71, 0x32, 0x3d, 0x07, 0x09, 0xa3, 0xea, 0x6d, 0x19, 0x5c, 0xbd, 0x99, 0xf3,
-	0x35, 0xab, 0x1b, 0x16, 0x4f, 0xec, 0xa2, 0xde, 0xe4, 0xc5, 0xc4, 0xbf, 0x24, 0x87, 0x91, 0x4e,
-	0x7b, 0x63, 0xfd, 0xfd, 0xef, 0x57, 0x1d, 0x37, 0x75, 0xe0, 0xe6, 0x5f, 0xb8, 0xd3, 0x24, 0xaf,
-	0xde, 0x03, 0x00, 0x00, 0xff, 0xff, 0x37, 0xee, 0x22, 0xa1, 0x98, 0x04, 0x00, 0x00,
+	// 335 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xb4, 0x92, 0xbf, 0x4e, 0xf3, 0x30,
+	0x14, 0xc5, 0xfb, 0x0d, 0x1f, 0x12, 0x16, 0x5d, 0x3c, 0x74, 0x28, 0x0c, 0xa8, 0x62, 0x61, 0xb1,
+	0x45, 0x2b, 0x78, 0x80, 0x96, 0x3f, 0x0b, 0x43, 0x05, 0x42, 0x42, 0x0c, 0x44, 0x4e, 0x72, 0xb1,
+	0x22, 0x39, 0x76, 0xf0, 0x75, 0x2a, 0x31, 0xf1, 0x70, 0xbc, 0x18, 0x4a, 0x62, 0x07, 0x8a, 0x40,
+	0x49, 0x90, 0x18, 0x93, 0x7b, 0xce, 0xef, 0xdc, 0x93, 0x5c, 0xf2, 0x28, 0x8d, 0x91, 0x0a, 0x98,
+	0x34, 0x4a, 0x68, 0xc9, 0x8c, 0x95, 0x3c, 0x51, 0xa6, 0x4c, 0x79, 0x9c, 0x49, 0x27, 0x62, 0x05,
+	0x3c, 0xd3, 0x0e, 0xac, 0x16, 0x8a, 0x27, 0xaa, 0x44, 0x07, 0x36, 0x42, 0xb0, 0x9b, 0x2c, 0x81,
+	0xa8, 0xb0, 0xc6, 0x99, 0x56, 0x17, 0x7d, 0x19, 0xb3, 0x7a, 0x4c, 0x0f, 0x3d, 0x3f, 0xc8, 0x98,
+	0x48, 0xf3, 0x4c, 0x33, 0x2f, 0x66, 0x9b, 0x93, 0xe9, 0xfd, 0xf0, 0x0d, 0x52, 0xe1, 0xc4, 0x4f,
+	0xf1, 0xd5, 0xac, 0xc9, 0x9e, 0xca, 0xbf, 0xea, 0x16, 0xe5, 0x80, 0x28, 0x24, 0xa0, 0x0f, 0xda,
+	0x6f, 0x82, 0x78, 0xfd, 0x14, 0x97, 0x4f, 0x1c, 0xf2, 0xc2, 0xbd, 0x34, 0xc3, 0xf9, 0xdb, 0x7f,
+	0x32, 0x59, 0x7a, 0xd0, 0xaa, 0xe1, 0xdc, 0x36, 0x18, 0xba, 0x21, 0xbb, 0xd7, 0x19, 0xba, 0x07,
+	0xa3, 0x01, 0xe9, 0x9c, 0x75, 0x7d, 0x2a, 0xd6, 0x8a, 0x6f, 0xe0, 0xb9, 0x04, 0x74, 0xd3, 0xc5,
+	0x20, 0x0f, 0x16, 0x46, 0x23, 0xcc, 0x46, 0x54, 0x13, 0x72, 0x05, 0xce, 0x2f, 0x43, 0x7b, 0x40,
+	0x3e, 0xd4, 0x21, 0xf9, 0xb8, 0xdb, 0xe4, 0x1d, 0xb3, 0x11, 0x7d, 0x25, 0x7b, 0xd5, 0x1a, 0xfe,
+	0x05, 0xd2, 0xd3, 0x7e, 0x6b, 0x07, 0x7d, 0xc8, 0x3c, 0x1b, 0x6a, 0x6b, 0x0b, 0x3b, 0x32, 0x5e,
+	0x59, 0x10, 0x2e, 0xfc, 0x00, 0xda, 0x03, 0xb5, 0x65, 0xf8, 0x55, 0x6d, 0x49, 0xc6, 0x77, 0x45,
+	0xfa, 0x29, 0xb5, 0xbf, 0x7b, 0x58, 0x90, 0x20, 0xe3, 0x73, 0x50, 0x30, 0xa8, 0xde, 0x96, 0x21,
+	0xd4, 0x9b, 0x04, 0x5f, 0xb8, 0x64, 0x76, 0x51, 0x5d, 0xf2, 0x6c, 0xb4, 0xbc, 0x24, 0x47, 0x89,
+	0xc9, 0x3b, 0xb1, 0xcb, 0x83, 0xef, 0x4f, 0x1d, 0xd7, 0x15, 0x70, 0xfd, 0x2f, 0xde, 0xa9, 0xc9,
+	0x8b, 0xf7, 0x00, 0x00, 0x00, 0xff, 0xff, 0x16, 0xbd, 0x50, 0x40, 0x78, 0x04, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.proto b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.proto
index 6243dcf..c9f3de8 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service.proto
@@ -18,7 +18,7 @@
 
 import "google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.proto";
 import "google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.proto";
-import "google.golang.org/cloud/bigtable/internal/empty/empty.proto";
+import "google/protobuf/empty.proto";
 
 option java_multiple_files = true;
 option java_outer_classname = "BigtableClusterServicesProto";
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.pb.go
index 1776b7f..13d3134 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.pb.go
@@ -28,12 +28,17 @@
 import fmt "fmt"
 import math "math"
 import google_bigtable_admin_cluster_v11 "google.golang.org/cloud/bigtable/internal/cluster_data_proto"
+import google_protobuf "github.com/golang/protobuf/ptypes/timestamp"
 
 // Reference imports to suppress errors if they are not otherwise used.
 var _ = proto.Marshal
 var _ = fmt.Errorf
 var _ = math.Inf
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
 // Request message for BigtableClusterService.ListZones.
 type ListZonesRequest struct {
 	// The unique name of the project for which a list of supported zones is
@@ -94,7 +99,7 @@
 	// The list of requested Clusters.
 	Clusters []*google_bigtable_admin_cluster_v11.Cluster `protobuf:"bytes,1,rep,name=clusters" json:"clusters,omitempty"`
 	// The zones for which clusters could not be retrieved.
-	FailedZones []*google_bigtable_admin_cluster_v11.Zone `protobuf:"bytes,2,rep,name=failed_zones" json:"failed_zones,omitempty"`
+	FailedZones []*google_bigtable_admin_cluster_v11.Zone `protobuf:"bytes,2,rep,name=failed_zones,json=failedZones" json:"failed_zones,omitempty"`
 }
 
 func (m *ListClustersResponse) Reset()                    { *m = ListClustersResponse{} }
@@ -124,7 +129,7 @@
 	// The id to be used when referring to the new cluster within its zone,
 	// e.g. just the "test-cluster" section of the full name
 	// "projects/<project>/zones/<zone>/clusters/test-cluster".
-	ClusterId string `protobuf:"bytes,2,opt,name=cluster_id" json:"cluster_id,omitempty"`
+	ClusterId string `protobuf:"bytes,2,opt,name=cluster_id,json=clusterId" json:"cluster_id,omitempty"`
 	// The cluster to create.
 	// The "name", "delete_time", and "current_operation" fields must be left
 	// blank.
@@ -147,7 +152,11 @@
 // BigtableClusterService.CreateCluster.
 type CreateClusterMetadata struct {
 	// The request which prompted the creation of this operation.
-	OriginalRequest *CreateClusterRequest `protobuf:"bytes,1,opt,name=original_request" json:"original_request,omitempty"`
+	OriginalRequest *CreateClusterRequest `protobuf:"bytes,1,opt,name=original_request,json=originalRequest" json:"original_request,omitempty"`
+	// The time at which original_request was received.
+	RequestTime *google_protobuf.Timestamp `protobuf:"bytes,2,opt,name=request_time,json=requestTime" json:"request_time,omitempty"`
+	// The time at which this operation failed or was completed successfully.
+	FinishTime *google_protobuf.Timestamp `protobuf:"bytes,3,opt,name=finish_time,json=finishTime" json:"finish_time,omitempty"`
 }
 
 func (m *CreateClusterMetadata) Reset()                    { *m = CreateClusterMetadata{} }
@@ -162,11 +171,33 @@
 	return nil
 }
 
+func (m *CreateClusterMetadata) GetRequestTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.RequestTime
+	}
+	return nil
+}
+
+func (m *CreateClusterMetadata) GetFinishTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.FinishTime
+	}
+	return nil
+}
+
 // Metadata type for the operation returned by
 // BigtableClusterService.UpdateCluster.
 type UpdateClusterMetadata struct {
 	// The request which prompted the creation of this operation.
-	OriginalRequest *google_bigtable_admin_cluster_v11.Cluster `protobuf:"bytes,1,opt,name=original_request" json:"original_request,omitempty"`
+	OriginalRequest *google_bigtable_admin_cluster_v11.Cluster `protobuf:"bytes,1,opt,name=original_request,json=originalRequest" json:"original_request,omitempty"`
+	// The time at which original_request was received.
+	RequestTime *google_protobuf.Timestamp `protobuf:"bytes,2,opt,name=request_time,json=requestTime" json:"request_time,omitempty"`
+	// The time at which this operation was cancelled. If set, this operation is
+	// in the process of undoing itself (which is guaranteed to succeed) and
+	// cannot be cancelled again.
+	CancelTime *google_protobuf.Timestamp `protobuf:"bytes,3,opt,name=cancel_time,json=cancelTime" json:"cancel_time,omitempty"`
+	// The time at which this operation failed or was completed successfully.
+	FinishTime *google_protobuf.Timestamp `protobuf:"bytes,4,opt,name=finish_time,json=finishTime" json:"finish_time,omitempty"`
 }
 
 func (m *UpdateClusterMetadata) Reset()                    { *m = UpdateClusterMetadata{} }
@@ -181,6 +212,27 @@
 	return nil
 }
 
+func (m *UpdateClusterMetadata) GetRequestTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.RequestTime
+	}
+	return nil
+}
+
+func (m *UpdateClusterMetadata) GetCancelTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.CancelTime
+	}
+	return nil
+}
+
+func (m *UpdateClusterMetadata) GetFinishTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.FinishTime
+	}
+	return nil
+}
+
 // Request message for BigtableClusterService.DeleteCluster.
 type DeleteClusterRequest struct {
 	// The unique name of the cluster to be deleted.
@@ -208,6 +260,10 @@
 // Metadata type for the operation returned by
 // BigtableClusterService.UndeleteCluster.
 type UndeleteClusterMetadata struct {
+	// The time at which the original request was received.
+	RequestTime *google_protobuf.Timestamp `protobuf:"bytes,1,opt,name=request_time,json=requestTime" json:"request_time,omitempty"`
+	// The time at which this operation failed or was completed successfully.
+	FinishTime *google_protobuf.Timestamp `protobuf:"bytes,2,opt,name=finish_time,json=finishTime" json:"finish_time,omitempty"`
 }
 
 func (m *UndeleteClusterMetadata) Reset()                    { *m = UndeleteClusterMetadata{} }
@@ -215,6 +271,20 @@
 func (*UndeleteClusterMetadata) ProtoMessage()               {}
 func (*UndeleteClusterMetadata) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
 
+func (m *UndeleteClusterMetadata) GetRequestTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.RequestTime
+	}
+	return nil
+}
+
+func (m *UndeleteClusterMetadata) GetFinishTime() *google_protobuf.Timestamp {
+	if m != nil {
+		return m.FinishTime
+	}
+	return nil
+}
+
 func init() {
 	proto.RegisterType((*ListZonesRequest)(nil), "google.bigtable.admin.cluster.v1.ListZonesRequest")
 	proto.RegisterType((*ListZonesResponse)(nil), "google.bigtable.admin.cluster.v1.ListZonesResponse")
@@ -230,30 +300,38 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 391 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x93, 0x4f, 0x4f, 0xea, 0x40,
-	0x14, 0xc5, 0xd3, 0xc7, 0xfb, 0x7b, 0x61, 0x01, 0x7d, 0xa0, 0xe8, 0x0a, 0x0b, 0x21, 0xb8, 0x19,
-	0x22, 0x46, 0x17, 0xea, 0x0a, 0x4c, 0x4c, 0x8c, 0x24, 0x44, 0x43, 0x62, 0x8c, 0x49, 0x33, 0xd0,
-	0xeb, 0x64, 0x92, 0x76, 0x06, 0x3b, 0x03, 0x0b, 0xbf, 0x85, 0xdf, 0xd8, 0x96, 0x4e, 0x8d, 0x10,
-	0xa0, 0x76, 0x47, 0xce, 0x9c, 0xb9, 0xe7, 0x77, 0x98, 0x5e, 0x60, 0x4c, 0x4a, 0xe6, 0x23, 0x61,
-	0xd2, 0xa7, 0x82, 0x11, 0x19, 0xb2, 0xee, 0xd4, 0x97, 0x73, 0xaf, 0x3b, 0xe1, 0x4c, 0xd3, 0x89,
-	0x8f, 0x5d, 0x2e, 0x34, 0x86, 0x82, 0xfa, 0x91, 0x3e, 0x57, 0xd1, 0x4f, 0x57, 0x61, 0xb8, 0xe0,
-	0x53, 0x74, 0x67, 0xa1, 0xd4, 0xf2, 0xd3, 0xe7, 0xae, 0x1f, 0x07, 0xa8, 0x14, 0x65, 0xa8, 0xc8,
-	0xd2, 0x67, 0x37, 0x4c, 0x50, 0xea, 0x27, 0xd4, 0x0b, 0xb8, 0x20, 0xe6, 0x16, 0x59, 0x9c, 0x1c,
-	0x3e, 0xe6, 0x47, 0xf1, 0xa8, 0xa6, 0xdb, 0x38, 0xe2, 0xb3, 0x24, 0xdb, 0x69, 0x40, 0xf9, 0x8e,
-	0x2b, 0xfd, 0x24, 0x05, 0xaa, 0x7b, 0x7c, 0x9d, 0xa3, 0xd2, 0x76, 0x09, 0x7e, 0x0a, 0x1a, 0x60,
-	0xdd, 0x6a, 0x58, 0x9d, 0x7f, 0xce, 0x2d, 0x54, 0xbe, 0x38, 0xd4, 0x4c, 0x0a, 0x85, 0xf6, 0x19,
-	0xfc, 0x7a, 0x8b, 0x85, 0xc8, 0x53, 0xe8, 0x14, 0x7b, 0x6d, 0x92, 0x55, 0x81, 0xc4, 0xf7, 0x9d,
-	0x23, 0xa8, 0xdc, 0xa0, 0x1e, 0x24, 0xea, 0xe6, 0xb8, 0x26, 0xfc, 0x8f, 0xe3, 0x8c, 0x67, 0x0b,
-	0xd3, 0xbb, 0x05, 0xd5, 0x55, 0x97, 0xe1, 0xba, 0x84, 0xbf, 0x26, 0x33, 0x45, 0x3b, 0xce, 0x46,
-	0x33, 0x53, 0xec, 0x2b, 0x28, 0xbd, 0x50, 0xee, 0xa3, 0xe7, 0x26, 0xdd, 0x7e, 0xe4, 0xea, 0xa6,
-	0xa1, 0x3a, 0x08, 0x91, 0x6a, 0xdc, 0x55, 0xcf, 0xb6, 0x01, 0xd2, 0x57, 0xe0, 0x5e, 0x94, 0x10,
-	0x6b, 0x17, 0xf0, 0xc7, 0x68, 0xf5, 0x42, 0x24, 0xe4, 0x61, 0x76, 0x38, 0xd4, 0x56, 0x52, 0x87,
-	0xa8, 0x69, 0xfc, 0xbc, 0xf6, 0x08, 0xca, 0x32, 0xe4, 0x8c, 0x47, 0xdf, 0x84, 0x1b, 0x26, 0x28,
-	0x4b, 0x84, 0x62, 0xef, 0xfc, 0x1b, 0xd3, 0x37, 0x14, 0x71, 0x9e, 0xa1, 0x36, 0x9e, 0x79, 0x1b,
-	0xa2, 0x06, 0x5b, 0xa3, 0x72, 0x14, 0x69, 0x41, 0xf5, 0x1a, 0x7d, 0xdc, 0xfd, 0xf7, 0x39, 0x6d,
-	0xd8, 0x1b, 0x0b, 0x2f, 0xdb, 0x77, 0x00, 0xfb, 0x6b, 0xbe, 0x94, 0xb6, 0x3f, 0x84, 0xd6, 0x54,
-	0x06, 0x99, 0x60, 0xfd, 0x66, 0xdf, 0x1c, 0x99, 0x01, 0x0f, 0xc9, 0xf2, 0x0e, 0xcd, 0xee, 0x8e,
-	0xe2, 0xf5, 0x19, 0x59, 0x93, 0xdf, 0xcb, 0x3d, 0x3a, 0xfd, 0x08, 0x00, 0x00, 0xff, 0xff, 0x0b,
-	0xf1, 0x19, 0x19, 0x2e, 0x04, 0x00, 0x00,
+	// 519 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xb4, 0x55, 0x5d, 0x6b, 0x13, 0x41,
+	0x14, 0x65, 0x92, 0xfa, 0xd1, 0xbb, 0x05, 0xdb, 0x35, 0xd5, 0x10, 0x10, 0xc3, 0x2a, 0xb5, 0x15,
+	0xd9, 0xc5, 0x08, 0xbe, 0xa8, 0x2f, 0x89, 0x22, 0x05, 0x03, 0x35, 0xb6, 0x20, 0xbe, 0x2c, 0x93,
+	0xdd, 0x9b, 0x75, 0x60, 0x76, 0x26, 0xee, 0x4c, 0xfa, 0xe0, 0x8f, 0xf0, 0xcd, 0xbf, 0x20, 0xfe,
+	0x42, 0x9f, 0x65, 0x77, 0x66, 0x82, 0x0d, 0x69, 0x37, 0x8b, 0xf4, 0x6d, 0x77, 0xee, 0x39, 0xf7,
+	0x9c, 0x73, 0xe7, 0xc2, 0x40, 0x96, 0x49, 0x99, 0x71, 0x0c, 0x33, 0xc9, 0xa9, 0xc8, 0x42, 0x59,
+	0x64, 0x51, 0xc2, 0xe5, 0x22, 0x8d, 0xa6, 0x2c, 0xd3, 0x74, 0xca, 0x31, 0x62, 0x42, 0x63, 0x21,
+	0x28, 0x8f, 0x12, 0xbe, 0x50, 0x1a, 0x8b, 0x58, 0x61, 0x71, 0xce, 0x12, 0x8c, 0xe7, 0x85, 0xd4,
+	0x72, 0x89, 0x8b, 0x57, 0xcb, 0x39, 0x2a, 0x45, 0x33, 0x54, 0x61, 0x85, 0xf3, 0xfb, 0x56, 0xc8,
+	0xe1, 0x43, 0x9a, 0xe6, 0x4c, 0x84, 0x96, 0x15, 0x9e, 0x3f, 0xef, 0x7d, 0x6e, 0x6e, 0x25, 0xa5,
+	0x9a, 0x5e, 0xe6, 0xa3, 0xac, 0x19, 0xed, 0xde, 0x43, 0xd3, 0x39, 0xaa, 0xfe, 0xa6, 0x8b, 0x59,
+	0xa4, 0x59, 0x8e, 0x4a, 0xd3, 0x7c, 0x6e, 0x00, 0xc1, 0x01, 0xec, 0x7e, 0x60, 0x4a, 0x7f, 0x91,
+	0x02, 0xd5, 0x04, 0xbf, 0x2d, 0x50, 0x69, 0xdf, 0x87, 0x2d, 0x41, 0x73, 0xec, 0x92, 0x3e, 0x39,
+	0xdc, 0x9e, 0x54, 0xdf, 0xc1, 0x47, 0xd8, 0xfb, 0x07, 0xa7, 0xe6, 0x52, 0x28, 0xf4, 0x5f, 0xc3,
+	0x8d, 0xef, 0xe5, 0x41, 0x97, 0xf4, 0xdb, 0x87, 0xde, 0xe0, 0x20, 0xac, 0x4b, 0x1a, 0x96, 0xfc,
+	0x89, 0x21, 0x05, 0x4f, 0x60, 0xef, 0x3d, 0xea, 0x91, 0x29, 0x5e, 0xa5, 0x7d, 0x04, 0x77, 0x4b,
+	0x6d, 0x8b, 0xbc, 0xd2, 0xe6, 0x6f, 0x02, 0x9d, 0x8b, 0x58, 0x6b, 0xf5, 0x1d, 0xdc, 0xb6, 0x36,
+	0x9c, 0xdb, 0xa3, 0x7a, 0xb7, 0xce, 0xdb, 0x92, 0xea, 0x1f, 0xc3, 0xce, 0x8c, 0x32, 0x8e, 0x69,
+	0x6c, 0x82, 0xb7, 0x1a, 0x05, 0xf7, 0x0c, 0xb7, 0x1a, 0x62, 0xf0, 0x83, 0x40, 0x67, 0x54, 0x20,
+	0xd5, 0x58, 0x3f, 0x02, 0xff, 0x01, 0x80, 0xbb, 0x5d, 0x96, 0x76, 0x5b, 0x55, 0x65, 0xdb, 0x9e,
+	0x1c, 0xa7, 0xfe, 0x08, 0x6e, 0xd9, 0x9f, 0x6e, 0xbb, 0x4f, 0x9a, 0x85, 0x73, 0xcc, 0xe0, 0x0f,
+	0x81, 0xfd, 0x0b, 0x86, 0xc6, 0xa8, 0x69, 0xb9, 0x4b, 0x3e, 0x85, 0x5d, 0x59, 0xb0, 0x8c, 0x09,
+	0xca, 0xe3, 0xc2, 0xb8, 0xac, 0xdc, 0x79, 0x83, 0x97, 0x1b, 0xe8, 0xac, 0xc9, 0x38, 0xb9, 0xe3,
+	0xfa, 0xb9, 0xd0, 0x6f, 0x60, 0xc7, 0x76, 0x8e, 0xcb, 0x15, 0xad, 0x22, 0x7a, 0x83, 0x9e, 0x6b,
+	0xef, 0xf6, 0x37, 0x3c, 0x75, 0xfb, 0x3b, 0xf1, 0x2c, 0xbe, 0x3c, 0xf1, 0x5f, 0x81, 0x37, 0x63,
+	0x82, 0xa9, 0xaf, 0x86, 0xdd, 0xae, 0x65, 0x83, 0x81, 0x97, 0x07, 0xc1, 0xaf, 0x16, 0xec, 0x9f,
+	0xcd, 0xd3, 0x35, 0xc1, 0x4f, 0x2f, 0x0d, 0xde, 0x60, 0xc0, 0xd7, 0x90, 0x35, 0xa1, 0x22, 0x41,
+	0xbe, 0x71, 0x56, 0x03, 0x5f, 0x37, 0xa8, 0xad, 0x46, 0x83, 0x7a, 0x0a, 0x9d, 0xb7, 0xc8, 0x71,
+	0x93, 0x8d, 0x0d, 0x9e, 0xc1, 0xbd, 0x33, 0x91, 0x6e, 0x8a, 0xfe, 0x49, 0xe0, 0xfe, 0x0a, 0x7c,
+	0x79, 0x09, 0xab, 0xe3, 0x22, 0xff, 0xb5, 0x1a, 0xad, 0x26, 0x89, 0x87, 0x63, 0x78, 0x9c, 0xc8,
+	0xbc, 0xf6, 0xae, 0x87, 0x8f, 0x86, 0xb6, 0x64, 0xcd, 0x7f, 0x32, 0x4f, 0xc1, 0xd8, 0xbe, 0x04,
+	0x27, 0xa5, 0xca, 0x09, 0x99, 0xde, 0xac, 0xe4, 0x5e, 0xfc, 0x0d, 0x00, 0x00, 0xff, 0xff, 0x44,
+	0xc9, 0x19, 0xcf, 0x7c, 0x06, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.proto b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.proto
index 2e5d4a7..c7a27df 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/cluster_service_proto/bigtable_cluster_service_messages.proto
@@ -17,6 +17,7 @@
 package google.bigtable.admin.cluster.v1;
 
 import "google.golang.org/cloud/bigtable/internal/cluster_data_proto/bigtable_cluster_data.proto";
+import "google/protobuf/timestamp.proto";
 
 option java_multiple_files = true;
 option java_outer_classname = "BigtableClusterServiceMessagesProto";
@@ -84,8 +85,10 @@
   CreateClusterRequest original_request = 1;
 
   // The time at which original_request was received.
+  google.protobuf.Timestamp request_time = 2;
 
   // The time at which this operation failed or was completed successfully.
+  google.protobuf.Timestamp finish_time = 3;
 }
 
 // Metadata type for the operation returned by
@@ -95,12 +98,15 @@
   Cluster original_request = 1;
 
   // The time at which original_request was received.
+  google.protobuf.Timestamp request_time = 2;
 
   // The time at which this operation was cancelled. If set, this operation is
   // in the process of undoing itself (which is guaranteed to succeed) and
   // cannot be cancelled again.
+  google.protobuf.Timestamp cancel_time = 3;
 
   // The time at which this operation failed or was completed successfully.
+  google.protobuf.Timestamp finish_time = 4;
 }
 
 // Request message for BigtableClusterService.DeleteCluster.
@@ -121,6 +127,8 @@
 // BigtableClusterService.UndeleteCluster.
 message UndeleteClusterMetadata {
   // The time at which the original request was received.
+  google.protobuf.Timestamp request_time = 1;
 
   // The time at which this operation failed or was completed successfully.
+  google.protobuf.Timestamp finish_time = 2;
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.pb.go
index ecf30f5..7778cdb 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.pb.go
@@ -14,6 +14,7 @@
 	Column
 	Cell
 	RowRange
+	RowSet
 	ColumnRange
 	TimestampRange
 	ValueRange
@@ -32,6 +33,10 @@
 var _ = fmt.Errorf
 var _ = math.Inf
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
 // Specifies the complete (requested) contents of a single row of a table.
 // Rows which exceed 256MiB in size cannot be read in full.
 type Row struct {
@@ -113,7 +118,7 @@
 	// a coarser "granularity" to further restrict the allowed values. For
 	// example, a table which specifies millisecond granularity will only allow
 	// values of "timestamp_micros" which are multiples of 1000.
-	TimestampMicros int64 `protobuf:"varint,1,opt,name=timestamp_micros" json:"timestamp_micros,omitempty"`
+	TimestampMicros int64 `protobuf:"varint,1,opt,name=timestamp_micros,json=timestampMicros" json:"timestamp_micros,omitempty"`
 	// The value stored in the cell.
 	// May contain any byte string, including the empty string, up to 100MiB in
 	// length.
@@ -130,9 +135,9 @@
 // Specifies a contiguous range of rows.
 type RowRange struct {
 	// Inclusive lower bound. If left empty, interpreted as the empty string.
-	StartKey []byte `protobuf:"bytes,2,opt,name=start_key,proto3" json:"start_key,omitempty"`
+	StartKey []byte `protobuf:"bytes,2,opt,name=start_key,json=startKey,proto3" json:"start_key,omitempty"`
 	// Exclusive upper bound. If left empty, interpreted as infinity.
-	EndKey []byte `protobuf:"bytes,3,opt,name=end_key,proto3" json:"end_key,omitempty"`
+	EndKey []byte `protobuf:"bytes,3,opt,name=end_key,json=endKey,proto3" json:"end_key,omitempty"`
 }
 
 func (m *RowRange) Reset()                    { *m = RowRange{} }
@@ -140,13 +145,33 @@
 func (*RowRange) ProtoMessage()               {}
 func (*RowRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
 
+// Specifies a non-contiguous set of rows.
+type RowSet struct {
+	// Single rows included in the set.
+	RowKeys [][]byte `protobuf:"bytes,1,rep,name=row_keys,json=rowKeys,proto3" json:"row_keys,omitempty"`
+	// Contiguous row ranges included in the set.
+	RowRanges []*RowRange `protobuf:"bytes,2,rep,name=row_ranges,json=rowRanges" json:"row_ranges,omitempty"`
+}
+
+func (m *RowSet) Reset()                    { *m = RowSet{} }
+func (m *RowSet) String() string            { return proto.CompactTextString(m) }
+func (*RowSet) ProtoMessage()               {}
+func (*RowSet) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+
+func (m *RowSet) GetRowRanges() []*RowRange {
+	if m != nil {
+		return m.RowRanges
+	}
+	return nil
+}
+
 // Specifies a contiguous range of columns within a single column family.
 // The range spans from <column_family>:<start_qualifier> to
 // <column_family>:<end_qualifier>, where both bounds can be either inclusive or
 // exclusive.
 type ColumnRange struct {
 	// The name of the column family within which this range falls.
-	FamilyName string `protobuf:"bytes,1,opt,name=family_name" json:"family_name,omitempty"`
+	FamilyName string `protobuf:"bytes,1,opt,name=family_name,json=familyName" json:"family_name,omitempty"`
 	// The column qualifier at which to start the range (within 'column_family').
 	// If neither field is set, interpreted as the empty string, inclusive.
 	//
@@ -166,7 +191,7 @@
 func (m *ColumnRange) Reset()                    { *m = ColumnRange{} }
 func (m *ColumnRange) String() string            { return proto.CompactTextString(m) }
 func (*ColumnRange) ProtoMessage()               {}
-func (*ColumnRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+func (*ColumnRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
 
 type isColumnRange_StartQualifier interface {
 	isColumnRange_StartQualifier()
@@ -176,16 +201,16 @@
 }
 
 type ColumnRange_StartQualifierInclusive struct {
-	StartQualifierInclusive []byte `protobuf:"bytes,2,opt,name=start_qualifier_inclusive,proto3,oneof"`
+	StartQualifierInclusive []byte `protobuf:"bytes,2,opt,name=start_qualifier_inclusive,json=startQualifierInclusive,proto3,oneof"`
 }
 type ColumnRange_StartQualifierExclusive struct {
-	StartQualifierExclusive []byte `protobuf:"bytes,3,opt,name=start_qualifier_exclusive,proto3,oneof"`
+	StartQualifierExclusive []byte `protobuf:"bytes,3,opt,name=start_qualifier_exclusive,json=startQualifierExclusive,proto3,oneof"`
 }
 type ColumnRange_EndQualifierInclusive struct {
-	EndQualifierInclusive []byte `protobuf:"bytes,4,opt,name=end_qualifier_inclusive,proto3,oneof"`
+	EndQualifierInclusive []byte `protobuf:"bytes,4,opt,name=end_qualifier_inclusive,json=endQualifierInclusive,proto3,oneof"`
 }
 type ColumnRange_EndQualifierExclusive struct {
-	EndQualifierExclusive []byte `protobuf:"bytes,5,opt,name=end_qualifier_exclusive,proto3,oneof"`
+	EndQualifierExclusive []byte `protobuf:"bytes,5,opt,name=end_qualifier_exclusive,json=endQualifierExclusive,proto3,oneof"`
 }
 
 func (*ColumnRange_StartQualifierInclusive) isColumnRange_StartQualifier() {}
@@ -345,15 +370,15 @@
 // Specifies a contiguous range of microsecond timestamps.
 type TimestampRange struct {
 	// Inclusive lower bound. If left empty, interpreted as 0.
-	StartTimestampMicros int64 `protobuf:"varint,1,opt,name=start_timestamp_micros" json:"start_timestamp_micros,omitempty"`
+	StartTimestampMicros int64 `protobuf:"varint,1,opt,name=start_timestamp_micros,json=startTimestampMicros" json:"start_timestamp_micros,omitempty"`
 	// Exclusive upper bound. If left empty, interpreted as infinity.
-	EndTimestampMicros int64 `protobuf:"varint,2,opt,name=end_timestamp_micros" json:"end_timestamp_micros,omitempty"`
+	EndTimestampMicros int64 `protobuf:"varint,2,opt,name=end_timestamp_micros,json=endTimestampMicros" json:"end_timestamp_micros,omitempty"`
 }
 
 func (m *TimestampRange) Reset()                    { *m = TimestampRange{} }
 func (m *TimestampRange) String() string            { return proto.CompactTextString(m) }
 func (*TimestampRange) ProtoMessage()               {}
-func (*TimestampRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+func (*TimestampRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
 
 // Specifies a contiguous range of raw byte values.
 type ValueRange struct {
@@ -376,7 +401,7 @@
 func (m *ValueRange) Reset()                    { *m = ValueRange{} }
 func (m *ValueRange) String() string            { return proto.CompactTextString(m) }
 func (*ValueRange) ProtoMessage()               {}
-func (*ValueRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
+func (*ValueRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
 
 type isValueRange_StartValue interface {
 	isValueRange_StartValue()
@@ -386,16 +411,16 @@
 }
 
 type ValueRange_StartValueInclusive struct {
-	StartValueInclusive []byte `protobuf:"bytes,1,opt,name=start_value_inclusive,proto3,oneof"`
+	StartValueInclusive []byte `protobuf:"bytes,1,opt,name=start_value_inclusive,json=startValueInclusive,proto3,oneof"`
 }
 type ValueRange_StartValueExclusive struct {
-	StartValueExclusive []byte `protobuf:"bytes,2,opt,name=start_value_exclusive,proto3,oneof"`
+	StartValueExclusive []byte `protobuf:"bytes,2,opt,name=start_value_exclusive,json=startValueExclusive,proto3,oneof"`
 }
 type ValueRange_EndValueInclusive struct {
-	EndValueInclusive []byte `protobuf:"bytes,3,opt,name=end_value_inclusive,proto3,oneof"`
+	EndValueInclusive []byte `protobuf:"bytes,3,opt,name=end_value_inclusive,json=endValueInclusive,proto3,oneof"`
 }
 type ValueRange_EndValueExclusive struct {
-	EndValueExclusive []byte `protobuf:"bytes,4,opt,name=end_value_exclusive,proto3,oneof"`
+	EndValueExclusive []byte `protobuf:"bytes,4,opt,name=end_value_exclusive,json=endValueExclusive,proto3,oneof"`
 }
 
 func (*ValueRange_StartValueInclusive) isValueRange_StartValue() {}
@@ -615,7 +640,7 @@
 func (m *RowFilter) Reset()                    { *m = RowFilter{} }
 func (m *RowFilter) String() string            { return proto.CompactTextString(m) }
 func (*RowFilter) ProtoMessage()               {}
-func (*RowFilter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
+func (*RowFilter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
 
 type isRowFilter_Filter interface {
 	isRowFilter_Filter()
@@ -634,49 +659,49 @@
 	Sink bool `protobuf:"varint,16,opt,name=sink,oneof"`
 }
 type RowFilter_PassAllFilter struct {
-	PassAllFilter bool `protobuf:"varint,17,opt,name=pass_all_filter,oneof"`
+	PassAllFilter bool `protobuf:"varint,17,opt,name=pass_all_filter,json=passAllFilter,oneof"`
 }
 type RowFilter_BlockAllFilter struct {
-	BlockAllFilter bool `protobuf:"varint,18,opt,name=block_all_filter,oneof"`
+	BlockAllFilter bool `protobuf:"varint,18,opt,name=block_all_filter,json=blockAllFilter,oneof"`
 }
 type RowFilter_RowKeyRegexFilter struct {
-	RowKeyRegexFilter []byte `protobuf:"bytes,4,opt,name=row_key_regex_filter,proto3,oneof"`
+	RowKeyRegexFilter []byte `protobuf:"bytes,4,opt,name=row_key_regex_filter,json=rowKeyRegexFilter,proto3,oneof"`
 }
 type RowFilter_RowSampleFilter struct {
-	RowSampleFilter float64 `protobuf:"fixed64,14,opt,name=row_sample_filter,oneof"`
+	RowSampleFilter float64 `protobuf:"fixed64,14,opt,name=row_sample_filter,json=rowSampleFilter,oneof"`
 }
 type RowFilter_FamilyNameRegexFilter struct {
-	FamilyNameRegexFilter string `protobuf:"bytes,5,opt,name=family_name_regex_filter,oneof"`
+	FamilyNameRegexFilter string `protobuf:"bytes,5,opt,name=family_name_regex_filter,json=familyNameRegexFilter,oneof"`
 }
 type RowFilter_ColumnQualifierRegexFilter struct {
-	ColumnQualifierRegexFilter []byte `protobuf:"bytes,6,opt,name=column_qualifier_regex_filter,proto3,oneof"`
+	ColumnQualifierRegexFilter []byte `protobuf:"bytes,6,opt,name=column_qualifier_regex_filter,json=columnQualifierRegexFilter,proto3,oneof"`
 }
 type RowFilter_ColumnRangeFilter struct {
-	ColumnRangeFilter *ColumnRange `protobuf:"bytes,7,opt,name=column_range_filter,oneof"`
+	ColumnRangeFilter *ColumnRange `protobuf:"bytes,7,opt,name=column_range_filter,json=columnRangeFilter,oneof"`
 }
 type RowFilter_TimestampRangeFilter struct {
-	TimestampRangeFilter *TimestampRange `protobuf:"bytes,8,opt,name=timestamp_range_filter,oneof"`
+	TimestampRangeFilter *TimestampRange `protobuf:"bytes,8,opt,name=timestamp_range_filter,json=timestampRangeFilter,oneof"`
 }
 type RowFilter_ValueRegexFilter struct {
-	ValueRegexFilter []byte `protobuf:"bytes,9,opt,name=value_regex_filter,proto3,oneof"`
+	ValueRegexFilter []byte `protobuf:"bytes,9,opt,name=value_regex_filter,json=valueRegexFilter,proto3,oneof"`
 }
 type RowFilter_ValueRangeFilter struct {
-	ValueRangeFilter *ValueRange `protobuf:"bytes,15,opt,name=value_range_filter,oneof"`
+	ValueRangeFilter *ValueRange `protobuf:"bytes,15,opt,name=value_range_filter,json=valueRangeFilter,oneof"`
 }
 type RowFilter_CellsPerRowOffsetFilter struct {
-	CellsPerRowOffsetFilter int32 `protobuf:"varint,10,opt,name=cells_per_row_offset_filter,oneof"`
+	CellsPerRowOffsetFilter int32 `protobuf:"varint,10,opt,name=cells_per_row_offset_filter,json=cellsPerRowOffsetFilter,oneof"`
 }
 type RowFilter_CellsPerRowLimitFilter struct {
-	CellsPerRowLimitFilter int32 `protobuf:"varint,11,opt,name=cells_per_row_limit_filter,oneof"`
+	CellsPerRowLimitFilter int32 `protobuf:"varint,11,opt,name=cells_per_row_limit_filter,json=cellsPerRowLimitFilter,oneof"`
 }
 type RowFilter_CellsPerColumnLimitFilter struct {
-	CellsPerColumnLimitFilter int32 `protobuf:"varint,12,opt,name=cells_per_column_limit_filter,oneof"`
+	CellsPerColumnLimitFilter int32 `protobuf:"varint,12,opt,name=cells_per_column_limit_filter,json=cellsPerColumnLimitFilter,oneof"`
 }
 type RowFilter_StripValueTransformer struct {
-	StripValueTransformer bool `protobuf:"varint,13,opt,name=strip_value_transformer,oneof"`
+	StripValueTransformer bool `protobuf:"varint,13,opt,name=strip_value_transformer,json=stripValueTransformer,oneof"`
 }
 type RowFilter_ApplyLabelTransformer struct {
-	ApplyLabelTransformer string `protobuf:"bytes,19,opt,name=apply_label_transformer,oneof"`
+	ApplyLabelTransformer string `protobuf:"bytes,19,opt,name=apply_label_transformer,json=applyLabelTransformer,oneof"`
 }
 
 func (*RowFilter_Chain_) isRowFilter_Filter()                     {}
@@ -1203,7 +1228,7 @@
 func (m *RowFilter_Chain) Reset()                    { *m = RowFilter_Chain{} }
 func (m *RowFilter_Chain) String() string            { return proto.CompactTextString(m) }
 func (*RowFilter_Chain) ProtoMessage()               {}
-func (*RowFilter_Chain) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8, 0} }
+func (*RowFilter_Chain) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 0} }
 
 func (m *RowFilter_Chain) GetFilters() []*RowFilter {
 	if m != nil {
@@ -1245,7 +1270,7 @@
 func (m *RowFilter_Interleave) Reset()                    { *m = RowFilter_Interleave{} }
 func (m *RowFilter_Interleave) String() string            { return proto.CompactTextString(m) }
 func (*RowFilter_Interleave) ProtoMessage()               {}
-func (*RowFilter_Interleave) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8, 1} }
+func (*RowFilter_Interleave) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 1} }
 
 func (m *RowFilter_Interleave) GetFilters() []*RowFilter {
 	if m != nil {
@@ -1264,20 +1289,20 @@
 type RowFilter_Condition struct {
 	// If "predicate_filter" outputs any cells, then "true_filter" will be
 	// evaluated on the input row. Otherwise, "false_filter" will be evaluated.
-	PredicateFilter *RowFilter `protobuf:"bytes,1,opt,name=predicate_filter" json:"predicate_filter,omitempty"`
+	PredicateFilter *RowFilter `protobuf:"bytes,1,opt,name=predicate_filter,json=predicateFilter" json:"predicate_filter,omitempty"`
 	// The filter to apply to the input row if "predicate_filter" returns any
 	// results. If not provided, no results will be returned in the true case.
-	TrueFilter *RowFilter `protobuf:"bytes,2,opt,name=true_filter" json:"true_filter,omitempty"`
+	TrueFilter *RowFilter `protobuf:"bytes,2,opt,name=true_filter,json=trueFilter" json:"true_filter,omitempty"`
 	// The filter to apply to the input row if "predicate_filter" does not
 	// return any results. If not provided, no results will be returned in the
 	// false case.
-	FalseFilter *RowFilter `protobuf:"bytes,3,opt,name=false_filter" json:"false_filter,omitempty"`
+	FalseFilter *RowFilter `protobuf:"bytes,3,opt,name=false_filter,json=falseFilter" json:"false_filter,omitempty"`
 }
 
 func (m *RowFilter_Condition) Reset()                    { *m = RowFilter_Condition{} }
 func (m *RowFilter_Condition) String() string            { return proto.CompactTextString(m) }
 func (*RowFilter_Condition) ProtoMessage()               {}
-func (*RowFilter_Condition) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8, 2} }
+func (*RowFilter_Condition) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 2} }
 
 func (m *RowFilter_Condition) GetPredicateFilter() *RowFilter {
 	if m != nil {
@@ -1315,23 +1340,23 @@
 func (m *Mutation) Reset()                    { *m = Mutation{} }
 func (m *Mutation) String() string            { return proto.CompactTextString(m) }
 func (*Mutation) ProtoMessage()               {}
-func (*Mutation) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
+func (*Mutation) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
 
 type isMutation_Mutation interface {
 	isMutation_Mutation()
 }
 
 type Mutation_SetCell_ struct {
-	SetCell *Mutation_SetCell `protobuf:"bytes,1,opt,name=set_cell,oneof"`
+	SetCell *Mutation_SetCell `protobuf:"bytes,1,opt,name=set_cell,json=setCell,oneof"`
 }
 type Mutation_DeleteFromColumn_ struct {
-	DeleteFromColumn *Mutation_DeleteFromColumn `protobuf:"bytes,2,opt,name=delete_from_column,oneof"`
+	DeleteFromColumn *Mutation_DeleteFromColumn `protobuf:"bytes,2,opt,name=delete_from_column,json=deleteFromColumn,oneof"`
 }
 type Mutation_DeleteFromFamily_ struct {
-	DeleteFromFamily *Mutation_DeleteFromFamily `protobuf:"bytes,3,opt,name=delete_from_family,oneof"`
+	DeleteFromFamily *Mutation_DeleteFromFamily `protobuf:"bytes,3,opt,name=delete_from_family,json=deleteFromFamily,oneof"`
 }
 type Mutation_DeleteFromRow_ struct {
-	DeleteFromRow *Mutation_DeleteFromRow `protobuf:"bytes,4,opt,name=delete_from_row,oneof"`
+	DeleteFromRow *Mutation_DeleteFromRow `protobuf:"bytes,4,opt,name=delete_from_row,json=deleteFromRow,oneof"`
 }
 
 func (*Mutation_SetCell_) isMutation_Mutation()          {}
@@ -1490,16 +1515,16 @@
 type Mutation_SetCell struct {
 	// The name of the family into which new data should be written.
 	// Must match [-_.a-zA-Z0-9]+
-	FamilyName string `protobuf:"bytes,1,opt,name=family_name" json:"family_name,omitempty"`
+	FamilyName string `protobuf:"bytes,1,opt,name=family_name,json=familyName" json:"family_name,omitempty"`
 	// The qualifier of the column into which new data should be written.
 	// Can be any byte string, including the empty string.
-	ColumnQualifier []byte `protobuf:"bytes,2,opt,name=column_qualifier,proto3" json:"column_qualifier,omitempty"`
+	ColumnQualifier []byte `protobuf:"bytes,2,opt,name=column_qualifier,json=columnQualifier,proto3" json:"column_qualifier,omitempty"`
 	// The timestamp of the cell into which new data should be written.
 	// Use -1 for current Bigtable server time.
 	// Otherwise, the client should set this value itself, noting that the
 	// default value is a timestamp of zero if the field is left unspecified.
 	// Values must match the "granularity" of the table (e.g. micros, millis).
-	TimestampMicros int64 `protobuf:"varint,3,opt,name=timestamp_micros" json:"timestamp_micros,omitempty"`
+	TimestampMicros int64 `protobuf:"varint,3,opt,name=timestamp_micros,json=timestampMicros" json:"timestamp_micros,omitempty"`
 	// The value to be written into the specified cell.
 	Value []byte `protobuf:"bytes,4,opt,name=value,proto3" json:"value,omitempty"`
 }
@@ -1507,25 +1532,25 @@
 func (m *Mutation_SetCell) Reset()                    { *m = Mutation_SetCell{} }
 func (m *Mutation_SetCell) String() string            { return proto.CompactTextString(m) }
 func (*Mutation_SetCell) ProtoMessage()               {}
-func (*Mutation_SetCell) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 0} }
+func (*Mutation_SetCell) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10, 0} }
 
 // A Mutation which deletes cells from the specified column, optionally
 // restricting the deletions to a given timestamp range.
 type Mutation_DeleteFromColumn struct {
 	// The name of the family from which cells should be deleted.
 	// Must match [-_.a-zA-Z0-9]+
-	FamilyName string `protobuf:"bytes,1,opt,name=family_name" json:"family_name,omitempty"`
+	FamilyName string `protobuf:"bytes,1,opt,name=family_name,json=familyName" json:"family_name,omitempty"`
 	// The qualifier of the column from which cells should be deleted.
 	// Can be any byte string, including the empty string.
-	ColumnQualifier []byte `protobuf:"bytes,2,opt,name=column_qualifier,proto3" json:"column_qualifier,omitempty"`
+	ColumnQualifier []byte `protobuf:"bytes,2,opt,name=column_qualifier,json=columnQualifier,proto3" json:"column_qualifier,omitempty"`
 	// The range of timestamps within which cells should be deleted.
-	TimeRange *TimestampRange `protobuf:"bytes,3,opt,name=time_range" json:"time_range,omitempty"`
+	TimeRange *TimestampRange `protobuf:"bytes,3,opt,name=time_range,json=timeRange" json:"time_range,omitempty"`
 }
 
 func (m *Mutation_DeleteFromColumn) Reset()                    { *m = Mutation_DeleteFromColumn{} }
 func (m *Mutation_DeleteFromColumn) String() string            { return proto.CompactTextString(m) }
 func (*Mutation_DeleteFromColumn) ProtoMessage()               {}
-func (*Mutation_DeleteFromColumn) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 1} }
+func (*Mutation_DeleteFromColumn) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10, 1} }
 
 func (m *Mutation_DeleteFromColumn) GetTimeRange() *TimestampRange {
 	if m != nil {
@@ -1538,13 +1563,13 @@
 type Mutation_DeleteFromFamily struct {
 	// The name of the family from which cells should be deleted.
 	// Must match [-_.a-zA-Z0-9]+
-	FamilyName string `protobuf:"bytes,1,opt,name=family_name" json:"family_name,omitempty"`
+	FamilyName string `protobuf:"bytes,1,opt,name=family_name,json=familyName" json:"family_name,omitempty"`
 }
 
 func (m *Mutation_DeleteFromFamily) Reset()                    { *m = Mutation_DeleteFromFamily{} }
 func (m *Mutation_DeleteFromFamily) String() string            { return proto.CompactTextString(m) }
 func (*Mutation_DeleteFromFamily) ProtoMessage()               {}
-func (*Mutation_DeleteFromFamily) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 2} }
+func (*Mutation_DeleteFromFamily) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10, 2} }
 
 // A Mutation which deletes all cells from the containing row.
 type Mutation_DeleteFromRow struct {
@@ -1553,18 +1578,18 @@
 func (m *Mutation_DeleteFromRow) Reset()                    { *m = Mutation_DeleteFromRow{} }
 func (m *Mutation_DeleteFromRow) String() string            { return proto.CompactTextString(m) }
 func (*Mutation_DeleteFromRow) ProtoMessage()               {}
-func (*Mutation_DeleteFromRow) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9, 3} }
+func (*Mutation_DeleteFromRow) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10, 3} }
 
 // Specifies an atomic read/modify/write operation on the latest value of the
 // specified column.
 type ReadModifyWriteRule struct {
 	// The name of the family to which the read/modify/write should be applied.
 	// Must match [-_.a-zA-Z0-9]+
-	FamilyName string `protobuf:"bytes,1,opt,name=family_name" json:"family_name,omitempty"`
+	FamilyName string `protobuf:"bytes,1,opt,name=family_name,json=familyName" json:"family_name,omitempty"`
 	// The qualifier of the column to which the read/modify/write should be
 	// applied.
 	// Can be any byte string, including the empty string.
-	ColumnQualifier []byte `protobuf:"bytes,2,opt,name=column_qualifier,proto3" json:"column_qualifier,omitempty"`
+	ColumnQualifier []byte `protobuf:"bytes,2,opt,name=column_qualifier,json=columnQualifier,proto3" json:"column_qualifier,omitempty"`
 	// The rule used to determine the column's new latest value from its current
 	// latest value.
 	//
@@ -1577,17 +1602,17 @@
 func (m *ReadModifyWriteRule) Reset()                    { *m = ReadModifyWriteRule{} }
 func (m *ReadModifyWriteRule) String() string            { return proto.CompactTextString(m) }
 func (*ReadModifyWriteRule) ProtoMessage()               {}
-func (*ReadModifyWriteRule) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
+func (*ReadModifyWriteRule) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
 
 type isReadModifyWriteRule_Rule interface {
 	isReadModifyWriteRule_Rule()
 }
 
 type ReadModifyWriteRule_AppendValue struct {
-	AppendValue []byte `protobuf:"bytes,3,opt,name=append_value,proto3,oneof"`
+	AppendValue []byte `protobuf:"bytes,3,opt,name=append_value,json=appendValue,proto3,oneof"`
 }
 type ReadModifyWriteRule_IncrementAmount struct {
-	IncrementAmount int64 `protobuf:"varint,4,opt,name=increment_amount,oneof"`
+	IncrementAmount int64 `protobuf:"varint,4,opt,name=increment_amount,json=incrementAmount,oneof"`
 }
 
 func (*ReadModifyWriteRule_AppendValue) isReadModifyWriteRule_Rule()     {}
@@ -1685,6 +1710,7 @@
 	proto.RegisterType((*Column)(nil), "google.bigtable.v1.Column")
 	proto.RegisterType((*Cell)(nil), "google.bigtable.v1.Cell")
 	proto.RegisterType((*RowRange)(nil), "google.bigtable.v1.RowRange")
+	proto.RegisterType((*RowSet)(nil), "google.bigtable.v1.RowSet")
 	proto.RegisterType((*ColumnRange)(nil), "google.bigtable.v1.ColumnRange")
 	proto.RegisterType((*TimestampRange)(nil), "google.bigtable.v1.TimestampRange")
 	proto.RegisterType((*ValueRange)(nil), "google.bigtable.v1.ValueRange")
@@ -1701,71 +1727,91 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 1053 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x56, 0xdf, 0x72, 0xdb, 0xd4,
-	0x13, 0xae, 0x2a, 0xc7, 0xb1, 0x57, 0x49, 0x9c, 0xc8, 0xfd, 0xe5, 0xe7, 0xba, 0x4d, 0x5a, 0xd4,
-	0x32, 0xc9, 0x00, 0x55, 0x06, 0x97, 0xa1, 0x5c, 0x64, 0x0a, 0xa3, 0x84, 0x4c, 0x18, 0xa6, 0x4c,
-	0x27, 0x30, 0x70, 0xa9, 0x39, 0xb6, 0x8f, 0xcd, 0x99, 0x1c, 0xe9, 0x18, 0x49, 0x4e, 0xeb, 0x3b,
-	0x1e, 0x87, 0x1b, 0x1e, 0x84, 0x17, 0xe0, 0x96, 0x57, 0x61, 0xcf, 0x1f, 0xc9, 0x96, 0xa3, 0x1a,
-	0xe7, 0x2a, 0xb1, 0xf6, 0xdb, 0x6f, 0x77, 0xbf, 0xb3, 0xbb, 0xe7, 0xc0, 0xe5, 0x58, 0x88, 0x31,
-	0xa7, 0xfe, 0x58, 0x70, 0x12, 0x8f, 0x7d, 0x91, 0x8c, 0x4f, 0x06, 0x5c, 0x4c, 0x87, 0x27, 0x7d,
-	0x36, 0xce, 0x48, 0x9f, 0xd3, 0x13, 0x16, 0x67, 0x34, 0x89, 0x09, 0x3f, 0x19, 0x92, 0x8c, 0x84,
-	0x93, 0x44, 0x64, 0xa2, 0x30, 0x86, 0xf2, 0x9b, 0xaf, 0xbe, 0xb9, 0xae, 0x61, 0xca, 0x6d, 0xfe,
-	0xcd, 0xe7, 0xde, 0x37, 0x60, 0x5f, 0x89, 0x77, 0xae, 0x03, 0xf6, 0x35, 0x9d, 0x75, 0xac, 0xa7,
-	0xd6, 0xf1, 0x96, 0xfb, 0x19, 0x34, 0x46, 0x24, 0x62, 0x9c, 0xd1, 0xb4, 0x73, 0xff, 0xa9, 0x7d,
-	0xec, 0xf4, 0xba, 0xfe, 0x6d, 0x57, 0xff, 0x42, 0x62, 0x66, 0xde, 0x19, 0xd4, 0xf5, 0x7f, 0xee,
-	0x16, 0xd4, 0x62, 0x12, 0x51, 0xc5, 0xd2, 0x74, 0x3f, 0x85, 0xcd, 0x81, 0xe0, 0xd3, 0x28, 0x5e,
-	0x49, 0x72, 0xa6, 0x20, 0xde, 0x39, 0xd4, 0xf5, 0x7f, 0xee, 0x1e, 0x34, 0x7f, 0x9b, 0x12, 0xce,
-	0x46, 0x8c, 0x26, 0x26, 0x9f, 0x23, 0xd8, 0x18, 0x50, 0xce, 0x73, 0x9e, 0x4e, 0x25, 0x0f, 0x02,
-	0xbc, 0xaf, 0xa1, 0x26, 0xff, 0xba, 0x1d, 0xd8, 0xcd, 0x58, 0x44, 0xd3, 0x8c, 0x44, 0x93, 0x30,
-	0x62, 0x83, 0x44, 0xa4, 0x8a, 0xca, 0x76, 0xb7, 0x61, 0xe3, 0x86, 0xf0, 0x29, 0x45, 0x2a, 0xc9,
-	0xbc, 0x03, 0x75, 0x4e, 0xfa, 0x14, 0xa9, 0x6d, 0xa4, 0x6e, 0x7a, 0x3e, 0x34, 0x50, 0x8d, 0x2b,
-	0xd4, 0x99, 0xca, 0x44, 0x90, 0x20, 0xc9, 0x42, 0x29, 0x8c, 0x86, 0xb7, 0x60, 0x93, 0xc6, 0x43,
-	0xf5, 0xc1, 0x96, 0x1f, 0xbc, 0xbf, 0x2d, 0x70, 0x74, 0xde, 0xda, 0xa7, 0x0d, 0x8e, 0x52, 0x6e,
-	0x16, 0x2e, 0x08, 0xf1, 0x0c, 0x1e, 0x6a, 0xa2, 0xa2, 0xae, 0x90, 0xc5, 0x03, 0x3e, 0x4d, 0xd9,
-	0x8d, 0xc9, 0xe3, 0xf2, 0x5e, 0x15, 0x88, 0xbe, 0xcf, 0x41, 0xb6, 0x01, 0x7d, 0x04, 0xff, 0x97,
-	0xf1, 0xab, 0x78, 0x6a, 0x0a, 0x62, 0xdd, 0x86, 0xcc, 0x59, 0x36, 0x34, 0x24, 0xd8, 0x83, 0xd6,
-	0x52, 0xa8, 0xa0, 0x05, 0xdb, 0x25, 0x2f, 0xef, 0x07, 0xd8, 0xf9, 0x29, 0x57, 0x50, 0x97, 0x76,
-	0x08, 0xfb, 0xda, 0xeb, 0x03, 0xca, 0x3e, 0x86, 0x07, 0x92, 0xe2, 0x96, 0x55, 0x16, 0x68, 0x7b,
-	0x7f, 0x58, 0x00, 0x3f, 0x4b, 0xe1, 0x35, 0xd9, 0x13, 0xf8, 0x9f, 0x26, 0x53, 0x87, 0xb1, 0x50,
-	0x86, 0x65, 0x2a, 0x5d, 0x02, 0xcc, 0x8b, 0xc8, 0xf5, 0x3a, 0x80, 0xb6, 0x0c, 0xb7, 0xec, 0x6f,
-	0x1b, 0x19, 0x4a, 0xe6, 0xb9, 0xb7, 0x51, 0x29, 0xd8, 0x06, 0x67, 0x81, 0x3e, 0x70, 0xa0, 0x59,
-	0xa0, 0xbd, 0x7f, 0x1a, 0xd0, 0xc4, 0x26, 0xb8, 0x60, 0x1c, 0xa7, 0xca, 0xfd, 0x02, 0x7b, 0xef,
-	0x57, 0xc2, 0x62, 0x95, 0x99, 0xd3, 0x7b, 0x56, 0xd5, 0x7b, 0x05, 0xda, 0x3f, 0x93, 0x50, 0xcc,
-	0xee, 0x35, 0x80, 0x1a, 0x4a, 0x4e, 0x89, 0xc9, 0xd9, 0xe9, 0x1d, 0xaf, 0x76, 0xfd, 0xae, 0xc0,
-	0xa3, 0xff, 0x29, 0x34, 0x07, 0x22, 0x1e, 0xb2, 0x8c, 0x89, 0x58, 0xd5, 0xe4, 0xf4, 0x8e, 0xfe,
-	0x23, 0x72, 0x0e, 0x47, 0xef, 0x1d, 0xa8, 0xa5, 0x2c, 0xbe, 0xee, 0xec, 0xa2, 0x63, 0x03, 0x7f,
-	0x3f, 0x84, 0xd6, 0x84, 0xa4, 0x69, 0x48, 0x38, 0x0f, 0x47, 0x0a, 0xde, 0xd9, 0x33, 0xa6, 0x2e,
-	0xec, 0xf6, 0xb9, 0x18, 0x5c, 0x2f, 0xda, 0x5c, 0x63, 0x3b, 0x84, 0x07, 0x89, 0x78, 0x27, 0xbb,
-	0x3d, 0x4c, 0xe8, 0x98, 0xbe, 0xcf, 0xed, 0x35, 0x73, 0x04, 0x8f, 0x60, 0x4f, 0xda, 0x53, 0x3c,
-	0x6c, 0xdc, 0x33, 0xc6, 0xb8, 0x83, 0x46, 0x0b, 0x8d, 0x1e, 0x74, 0x16, 0x26, 0xa1, 0x4c, 0x20,
-	0x1b, 0xb1, 0x89, 0x98, 0x23, 0x38, 0xd0, 0x1b, 0x62, 0xa1, 0x5d, 0x4b, 0xc0, 0xba, 0x89, 0xf4,
-	0x1a, 0xda, 0x06, 0x98, 0xc8, 0xf6, 0xc9, 0xcd, 0x9b, 0x4a, 0x98, 0x27, 0x1f, 0x5e, 0x2b, 0xaa,
-	0xd9, 0xd0, 0xff, 0x1c, 0xf6, 0xe7, 0x7d, 0x59, 0xa2, 0x68, 0x28, 0x0a, 0xaf, 0x8a, 0xa2, 0xdc,
-	0xff, 0xc8, 0xf2, 0x18, 0x5c, 0xdd, 0x4f, 0xa5, 0x1c, 0x9b, 0x26, 0xc7, 0xd3, 0xc2, 0xba, 0xc8,
-	0xdf, 0x52, 0xfc, 0x87, 0x55, 0xfc, 0xf3, 0x71, 0x40, 0xef, 0x8f, 0xe1, 0x91, 0x5a, 0x71, 0xe1,
-	0x44, 0x6a, 0x80, 0xaa, 0x8a, 0xd1, 0x28, 0xa5, 0x59, 0x4e, 0x03, 0x48, 0xb3, 0x81, 0xb0, 0xe7,
-	0xd0, 0x2d, 0xc3, 0x38, 0x8b, 0x58, 0x81, 0x72, 0x0c, 0x4a, 0xea, 0x5a, 0xa0, 0x8c, 0x70, 0x25,
-	0xe0, 0x96, 0x01, 0xe2, 0xb2, 0x48, 0xb3, 0x84, 0x4d, 0xcc, 0x9c, 0x64, 0x98, 0x7a, 0x3a, 0x12,
-	0x49, 0x84, 0x90, 0x6d, 0xd3, 0x04, 0x08, 0x21, 0x93, 0x09, 0x1e, 0xa3, 0xda, 0x93, 0x25, 0x48,
-	0x5b, 0x1f, 0x63, 0xf7, 0x15, 0x6c, 0xa8, 0xbe, 0x77, 0x7d, 0xd8, 0xd4, 0x01, 0xe4, 0x4e, 0x90,
-	0x9b, 0xfa, 0x60, 0x65, 0xcf, 0x76, 0x4f, 0x01, 0xe6, 0x5d, 0x7f, 0x67, 0xef, 0x3f, 0x2d, 0x68,
-	0x16, 0x5d, 0xef, 0xbe, 0x82, 0xdd, 0x49, 0x42, 0x87, 0x6c, 0x40, 0xb2, 0x42, 0x7c, 0x3d, 0xb2,
-	0xab, 0x69, 0xdc, 0x1e, 0x38, 0x59, 0x32, 0x2d, 0x7c, 0xee, 0xaf, 0xe3, 0xf3, 0x12, 0xb6, 0x46,
-	0x84, 0xa7, 0x85, 0x93, 0xbd, 0x86, 0x53, 0xd0, 0x80, 0xba, 0x86, 0x7b, 0x7f, 0xd5, 0xa0, 0xf1,
-	0x66, 0x9a, 0x11, 0x95, 0xf8, 0x57, 0xd0, 0x90, 0x07, 0x2d, 0x0f, 0xcc, 0x24, 0xfc, 0xbc, 0x8a,
-	0x27, 0xc7, 0xfb, 0x3f, 0xd2, 0x4c, 0xde, 0x71, 0x78, 0x34, 0xdf, 0x83, 0x3b, 0xa4, 0x9c, 0xca,
-	0x7a, 0x13, 0x11, 0x99, 0x83, 0x36, 0x05, 0xbc, 0x58, 0xc9, 0x71, 0xae, 0xdc, 0x2e, 0xd0, 0x4b,
-	0xcf, 0xc9, 0x6d, 0x32, 0x3d, 0xbb, 0xa6, 0xb0, 0x75, 0xc9, 0xf4, 0x33, 0x00, 0xc9, 0xbe, 0x85,
-	0xd6, 0x22, 0x19, 0x36, 0xaa, 0x5a, 0x1a, 0x4e, 0xef, 0x93, 0x35, 0x99, 0x50, 0x35, 0x6c, 0x2c,
-	0x02, 0x9b, 0xa6, 0xda, 0xea, 0x8b, 0x15, 0xaf, 0xf9, 0xe5, 0xfd, 0x61, 0x2e, 0xea, 0xaa, 0x07,
-	0x80, 0x5d, 0x7e, 0x00, 0xa8, 0x2d, 0xd6, 0x9d, 0xc1, 0xee, 0xb2, 0x18, 0x77, 0x8d, 0xf5, 0x25,
-	0x80, 0x8c, 0xa5, 0xe7, 0xde, 0x28, 0xb6, 0xc6, 0x42, 0xe9, 0x1e, 0x2d, 0x86, 0x36, 0x2f, 0xa8,
-	0xaa, 0xd0, 0x5d, 0xbc, 0x9c, 0x4b, 0xca, 0x04, 0x00, 0x8d, 0xc8, 0x68, 0xe6, 0xfd, 0x6e, 0x41,
-	0xfb, 0x8a, 0x92, 0xe1, 0x1b, 0x31, 0x64, 0xa3, 0xd9, 0x2f, 0x09, 0xcb, 0xe8, 0xd5, 0x94, 0xd3,
-	0xbb, 0x16, 0xb1, 0x0f, 0x5b, 0x38, 0xe6, 0xc5, 0x25, 0x58, 0xbc, 0x38, 0xf0, 0x7e, 0xc0, 0xcb,
-	0x35, 0xa1, 0x11, 0x8d, 0xb3, 0x90, 0x44, 0x62, 0x1a, 0x67, 0x4a, 0x39, 0xfb, 0xf2, 0x5e, 0x50,
-	0x87, 0x5a, 0x82, 0xa1, 0x82, 0x17, 0xb0, 0x3f, 0x10, 0x51, 0x45, 0xc5, 0xc1, 0x5e, 0x60, 0x7e,
-	0x9c, 0xe3, 0x23, 0xf4, 0xad, 0x7c, 0x83, 0xbe, 0xb5, 0xfa, 0x75, 0xf5, 0x18, 0x7d, 0xf9, 0x6f,
-	0x00, 0x00, 0x00, 0xff, 0xff, 0x98, 0x49, 0x33, 0xa8, 0xd8, 0x0a, 0x00, 0x00,
+	// 1368 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x57, 0xdb, 0x8e, 0x13, 0x47,
+	0x13, 0xf6, 0xac, 0xcf, 0x35, 0xbb, 0xd8, 0xdb, 0x7b, 0x32, 0x06, 0x7e, 0x56, 0xe6, 0x97, 0x62,
+	0x48, 0xf0, 0xc2, 0x82, 0x12, 0x22, 0x50, 0xc4, 0x9a, 0x43, 0x4c, 0x38, 0x37, 0x2b, 0x22, 0x45,
+	0x8a, 0x26, 0xbd, 0x9e, 0xb6, 0x33, 0xa2, 0x67, 0xda, 0xe9, 0x19, 0x63, 0xfc, 0x22, 0xb9, 0xcf,
+	0x73, 0xe4, 0x2e, 0x2f, 0x91, 0xd7, 0xc8, 0x65, 0x2e, 0x72, 0x11, 0xf5, 0x61, 0x4e, 0x5e, 0xb3,
+	0xbb, 0x8a, 0xb8, 0xf3, 0x54, 0x7d, 0xdf, 0x57, 0xd5, 0xd5, 0xd5, 0xd5, 0x6d, 0x18, 0x8c, 0x39,
+	0x1f, 0x33, 0xda, 0x1b, 0x73, 0x46, 0x82, 0x71, 0x8f, 0x8b, 0xf1, 0xde, 0x90, 0xf1, 0xa9, 0xbb,
+	0x77, 0xe4, 0x8d, 0x23, 0x72, 0xc4, 0xe8, 0x9e, 0x17, 0x44, 0x54, 0x04, 0x84, 0xed, 0xb9, 0x24,
+	0x22, 0xce, 0x44, 0xf0, 0x88, 0x27, 0x4e, 0x47, 0xda, 0x7a, 0xca, 0x86, 0x90, 0x51, 0x8a, 0x7d,
+	0xbd, 0xf7, 0x37, 0x3b, 0x2f, 0xa1, 0x88, 0xf9, 0x0c, 0x35, 0xa1, 0xf8, 0x8e, 0xce, 0x5b, 0xd6,
+	0xae, 0xd5, 0x5d, 0xc5, 0xf2, 0x27, 0xfa, 0x12, 0x6a, 0x23, 0xe2, 0x7b, 0xcc, 0xa3, 0x61, 0x6b,
+	0x65, 0xb7, 0xd8, 0xb5, 0xf7, 0xdb, 0xbd, 0xe3, 0xfc, 0xde, 0x63, 0x89, 0x99, 0xe3, 0x04, 0xdb,
+	0xc1, 0x50, 0xd1, 0x36, 0x84, 0xa0, 0x14, 0x10, 0x9f, 0x2a, 0xd1, 0x3a, 0x56, 0xbf, 0xd1, 0x6d,
+	0xa8, 0x0e, 0x39, 0x9b, 0xfa, 0xc1, 0x89, 0xa2, 0x0f, 0x14, 0x04, 0xc7, 0xd0, 0xce, 0x5b, 0xa8,
+	0x68, 0x13, 0xba, 0x08, 0xf5, 0x5f, 0xa6, 0x84, 0x79, 0x23, 0x8f, 0x0a, 0x93, 0x6d, 0x6a, 0x40,
+	0x3d, 0x28, 0x0f, 0x29, 0x63, 0xb1, 0x76, 0x6b, 0xa9, 0x36, 0x65, 0x0c, 0x6b, 0x58, 0xc7, 0x81,
+	0x92, 0xfc, 0x44, 0x57, 0xa1, 0x19, 0x79, 0x3e, 0x0d, 0x23, 0xe2, 0x4f, 0x1c, 0xdf, 0x1b, 0x0a,
+	0x1e, 0x2a, 0xf1, 0x22, 0x6e, 0x24, 0xf6, 0xe7, 0xca, 0x8c, 0x36, 0xa1, 0xfc, 0x9e, 0xb0, 0x29,
+	0x6d, 0xad, 0xa8, 0xe0, 0xfa, 0x03, 0x6d, 0x43, 0x85, 0x91, 0x23, 0xca, 0xc2, 0x56, 0x71, 0xb7,
+	0xd8, 0xad, 0x63, 0xf3, 0xd5, 0xb9, 0x0f, 0x35, 0xcc, 0x67, 0x98, 0x04, 0x63, 0x8a, 0x2e, 0x40,
+	0x3d, 0x8c, 0x88, 0x88, 0x1c, 0x59, 0x68, 0xcd, 0xae, 0x29, 0xc3, 0x53, 0x3a, 0x47, 0x3b, 0x50,
+	0xa5, 0x81, 0xab, 0x5c, 0x45, 0xe5, 0xaa, 0xd0, 0xc0, 0x7d, 0x4a, 0xe7, 0x9d, 0x9f, 0xa0, 0x82,
+	0xf9, 0xec, 0x0d, 0x8d, 0xd0, 0x79, 0xa8, 0x09, 0x3e, 0x93, 0x10, 0x99, 0x5c, 0xb1, 0xbb, 0x8a,
+	0xab, 0x82, 0xcf, 0x9e, 0xd2, 0x79, 0x88, 0xee, 0x02, 0x48, 0x97, 0x90, 0x71, 0xe2, 0xc5, 0x5f,
+	0x5c, 0xb6, 0xf8, 0x38, 0x19, 0x5c, 0x17, 0xe6, 0x57, 0xd8, 0xf9, 0x63, 0x05, 0x6c, 0x53, 0x70,
+	0x95, 0xe7, 0x65, 0xb0, 0xd5, 0x66, 0xce, 0x9d, 0xcc, 0xee, 0x81, 0x36, 0xbd, 0x90, 0x7b, 0x78,
+	0x0f, 0xce, 0xeb, 0x85, 0x24, 0x85, 0x77, 0xbc, 0x60, 0xc8, 0xa6, 0xa1, 0xf7, 0xde, 0x94, 0x65,
+	0x50, 0xc0, 0x3b, 0x0a, 0xf2, 0x3a, 0x46, 0x3c, 0x89, 0x01, 0xcb, 0xd8, 0xf4, 0x43, 0xcc, 0x2e,
+	0x2e, 0x67, 0x3f, 0x8a, 0x01, 0xe8, 0x0e, 0xec, 0xc8, 0x3a, 0x2d, 0x8b, 0x5c, 0x52, 0x5c, 0x0b,
+	0x6f, 0xd1, 0xc0, 0x5d, 0x12, 0xf7, 0x18, 0x33, 0x8d, 0x5a, 0x5e, 0xc6, 0x4c, 0x62, 0xf6, 0xd7,
+	0xa1, 0xb1, 0x90, 0x71, 0xbf, 0x01, 0x6b, 0x39, 0xb1, 0xce, 0x07, 0x38, 0x77, 0x18, 0x77, 0x8a,
+	0x2e, 0xe3, 0x6d, 0xd8, 0xd6, 0xac, 0x8f, 0x74, 0xd6, 0xa6, 0xf2, 0x1e, 0x2e, 0xb4, 0xd7, 0x0d,
+	0xd8, 0x94, 0xc2, 0xc7, 0x38, 0x2b, 0x8a, 0x83, 0x68, 0xe0, 0x2e, 0x30, 0x3a, 0x7f, 0x5b, 0x00,
+	0x6f, 0x65, 0x13, 0xc6, 0x61, 0xb7, 0x74, 0x58, 0xd5, 0x98, 0x99, 0xf2, 0x58, 0xa6, 0xb4, 0x1b,
+	0xca, 0xad, 0x18, 0x69, 0x71, 0x16, 0x58, 0x69, 0x69, 0x56, 0x8e, 0xb3, 0xd2, 0xcd, 0xb8, 0x01,
+	0x1b, 0x32, 0xd9, 0xc5, 0x48, 0x45, 0x53, 0xce, 0x75, 0x1a, 0xb8, 0x0b, 0x71, 0x72, 0x8c, 0x34,
+	0x4a, 0x69, 0x91, 0x91, 0x16, 0x7f, 0x0d, 0xec, 0x4c, 0x66, 0x7d, 0x1b, 0xea, 0x89, 0x40, 0xe7,
+	0x1f, 0x1b, 0xea, 0x98, 0xcf, 0x1e, 0x7b, 0x2c, 0xa2, 0x02, 0xdd, 0x85, 0xf2, 0xf0, 0x67, 0xe2,
+	0x05, 0x6a, 0xa5, 0xf6, 0xfe, 0x95, 0x8f, 0xf4, 0xbf, 0x46, 0xf7, 0x1e, 0x48, 0xe8, 0xa0, 0x80,
+	0x35, 0x07, 0x7d, 0x07, 0xa0, 0xa6, 0x28, 0xa3, 0xc4, 0xac, 0xda, 0xde, 0xef, 0x9e, 0xac, 0xf0,
+	0x24, 0xc1, 0x0f, 0x0a, 0x38, 0xc3, 0x46, 0xdf, 0x42, 0x7d, 0xc8, 0x03, 0xd7, 0x8b, 0x3c, 0x1e,
+	0xa8, 0x62, 0xd8, 0xfb, 0x9f, 0x9d, 0x92, 0x4c, 0x0c, 0x1f, 0x14, 0x70, 0xca, 0x45, 0x9b, 0x50,
+	0x0a, 0xbd, 0xe0, 0x5d, 0xab, 0xb9, 0x6b, 0x75, 0x6b, 0x83, 0x02, 0x56, 0x5f, 0xa8, 0x0b, 0x8d,
+	0x09, 0x09, 0x43, 0x87, 0x30, 0xe6, 0x8c, 0x14, 0xbf, 0xb5, 0x6e, 0x00, 0x6b, 0xd2, 0x71, 0xc0,
+	0x98, 0xa9, 0xc8, 0x35, 0x68, 0x1e, 0x31, 0x3e, 0x7c, 0x97, 0x85, 0x22, 0x03, 0x3d, 0xa7, 0x3c,
+	0x29, 0xf6, 0x26, 0x6c, 0x9a, 0xe9, 0xe2, 0x08, 0x3a, 0xa6, 0x1f, 0x62, 0x7c, 0xc9, 0x34, 0xc0,
+	0xba, 0x9e, 0x35, 0x58, 0xfa, 0x0c, 0xe5, 0x0b, 0x90, 0x46, 0x27, 0x24, 0xfe, 0x84, 0xd1, 0x18,
+	0x7f, 0x6e, 0xd7, 0xea, 0x5a, 0x83, 0x02, 0x6e, 0x08, 0x3e, 0x7b, 0xa3, 0x3c, 0x06, 0xfd, 0x35,
+	0xb4, 0x32, 0x63, 0x25, 0x1f, 0x44, 0x1e, 0xc0, 0xfa, 0xa0, 0x80, 0xb7, 0xd2, 0x29, 0x93, 0x0d,
+	0xf4, 0x00, 0x2e, 0xe9, 0x9b, 0x20, 0x73, 0x7a, 0x73, 0xfc, 0x8a, 0x49, 0xb2, 0xad, 0x61, 0xc9,
+	0x19, 0xce, 0x8a, 0xbc, 0x86, 0x0d, 0x23, 0xa2, 0xc6, 0x64, 0x4c, 0xad, 0xaa, 0xfd, 0xb9, 0x7c,
+	0xc2, 0x2d, 0x24, 0xd1, 0xb2, 0x00, 0xc3, 0xf4, 0xd3, 0x48, 0xfe, 0x00, 0xdb, 0xe9, 0x41, 0xcd,
+	0xa9, 0xd6, 0x94, 0x6a, 0x67, 0x99, 0x6a, 0x7e, 0x4c, 0x0c, 0x0a, 0x78, 0x33, 0xca, 0x59, 0x8c,
+	0x76, 0x0f, 0x90, 0x3e, 0x25, 0xb9, 0x85, 0xd6, 0xcd, 0x42, 0x9b, 0xca, 0x97, 0x5d, 0xde, 0x8b,
+	0x04, 0x9f, 0xcd, 0xa3, 0xa1, 0xf2, 0xf8, 0xdf, 0xb2, 0x3c, 0xd2, 0x99, 0x91, 0xea, 0x65, 0xe2,
+	0x7f, 0x03, 0x17, 0xd4, 0x1d, 0xe9, 0x4c, 0x64, 0xb1, 0xf9, 0xcc, 0xe1, 0xa3, 0x51, 0x48, 0xa3,
+	0x58, 0x18, 0x76, 0xad, 0x6e, 0x59, 0x0e, 0x6a, 0x05, 0x7a, 0x45, 0x05, 0xe6, 0xb3, 0x97, 0x0a,
+	0x61, 0xf8, 0xf7, 0xa0, 0x9d, 0xe7, 0x33, 0xcf, 0xf7, 0x12, 0xba, 0x6d, 0xe8, 0xdb, 0x19, 0xfa,
+	0x33, 0x09, 0x30, 0xec, 0x3e, 0x5c, 0x4a, 0xd9, 0x66, 0xdb, 0x72, 0x02, 0xab, 0x46, 0xe0, 0x7c,
+	0x2c, 0xa0, 0x37, 0x2b, 0xab, 0x71, 0x07, 0x76, 0xc2, 0x48, 0x78, 0x13, 0x33, 0x6d, 0x22, 0x41,
+	0x82, 0x70, 0xc4, 0x85, 0x4f, 0x45, 0x6b, 0xcd, 0x1c, 0x82, 0x2d, 0x05, 0x50, 0x95, 0x38, 0x4c,
+	0xdd, 0x92, 0x49, 0x26, 0x13, 0x36, 0x77, 0xd4, 0x2d, 0x9e, 0x63, 0x6e, 0xc4, 0x9d, 0xaa, 0x00,
+	0xcf, 0xa4, 0x3f, 0xc3, 0x6c, 0xdf, 0x87, 0xb2, 0x1a, 0x2c, 0xe8, 0x2b, 0xa8, 0xea, 0x4c, 0xf5,
+	0x5d, 0x6d, 0xef, 0x5f, 0x3a, 0x71, 0x02, 0xe0, 0x18, 0xdd, 0x7e, 0x04, 0x90, 0x0e, 0x96, 0xff,
+	0x2e, 0xf3, 0xa7, 0x05, 0xf5, 0x64, 0xaa, 0xa0, 0x01, 0x34, 0x27, 0x82, 0xba, 0xde, 0x90, 0x44,
+	0x49, 0x6b, 0xe8, 0x29, 0x79, 0x8a, 0x5e, 0x23, 0xa1, 0x25, 0x6d, 0x61, 0x47, 0x62, 0x9a, 0x88,
+	0xac, 0x9c, 0x45, 0x04, 0x24, 0xc3, 0xf0, 0xef, 0xc3, 0xea, 0x88, 0xb0, 0x30, 0x11, 0x28, 0x9e,
+	0x45, 0xc0, 0x56, 0x14, 0xfd, 0xd1, 0xaf, 0x41, 0x45, 0x73, 0x3b, 0x7f, 0x95, 0xa1, 0xf6, 0x7c,
+	0x1a, 0x11, 0xb5, 0xc4, 0x03, 0xa8, 0xc9, 0xf6, 0x94, 0xed, 0x60, 0x96, 0xf6, 0xff, 0x65, 0xa2,
+	0x31, 0xbe, 0xf7, 0x86, 0x46, 0xf2, 0xe9, 0x37, 0x28, 0xe0, 0x6a, 0xa8, 0x7f, 0xa2, 0x1f, 0x01,
+	0xb9, 0x94, 0x51, 0x59, 0x22, 0xc1, 0x7d, 0xd3, 0x76, 0x66, 0x89, 0xd7, 0x4f, 0x14, 0x7b, 0xa8,
+	0x68, 0x8f, 0x05, 0xf7, 0x75, 0x1b, 0xca, 0x13, 0xe5, 0x2e, 0xd8, 0x16, 0xe5, 0xf5, 0xa8, 0x33,
+	0x05, 0x38, 0xab, 0xbc, 0x7e, 0x59, 0xe7, 0xe5, 0xcd, 0x6b, 0xfb, 0x10, 0x1a, 0x59, 0x79, 0xc1,
+	0x67, 0x6a, 0x76, 0xdb, 0xfb, 0xd7, 0xce, 0xa8, 0x8d, 0xf9, 0x4c, 0x5e, 0x21, 0x6e, 0xd6, 0xd0,
+	0xfe, 0xd5, 0x82, 0xaa, 0x29, 0xd5, 0xe9, 0x0f, 0xc3, 0xab, 0xd0, 0x5c, 0x9c, 0xd3, 0xe6, 0xa1,
+	0xdb, 0x58, 0x18, 0xcc, 0x4b, 0x5f, 0xdc, 0xc5, 0x53, 0x5e, 0xdc, 0xa5, 0xcc, 0x8b, 0xbb, 0xfd,
+	0x9b, 0x05, 0xcd, 0xc5, 0xb2, 0x7f, 0xd2, 0x0c, 0x0f, 0x00, 0x64, 0x26, 0x7a, 0x9e, 0x9a, 0x6d,
+	0x3a, 0xc3, 0x40, 0xc7, 0x75, 0xc9, 0x52, 0x3f, 0xdb, 0xb7, 0xb2, 0x29, 0x9a, 0x6d, 0x3a, 0x2d,
+	0xc5, 0x76, 0x03, 0xd6, 0x72, 0x7b, 0xd2, 0x07, 0xa8, 0xf9, 0x66, 0xb7, 0x3a, 0xbf, 0x5b, 0xb0,
+	0x81, 0x29, 0x71, 0x9f, 0x73, 0xd7, 0x1b, 0xcd, 0xbf, 0x17, 0x5e, 0x44, 0xf1, 0x94, 0xd1, 0x4f,
+	0xba, 0xf0, 0x2b, 0xb0, 0x4a, 0x26, 0x93, 0xe4, 0x95, 0x95, 0xbc, 0xc9, 0x6d, 0x6d, 0x55, 0xd3,
+	0x12, 0x7d, 0x0e, 0x4d, 0x2f, 0x18, 0x0a, 0xea, 0xd3, 0x20, 0x72, 0x88, 0xcf, 0xa7, 0x41, 0xa4,
+	0xf6, 0xa7, 0x28, 0xaf, 0xfe, 0xc4, 0x73, 0xa0, 0x1c, 0xfd, 0x0a, 0x94, 0xc4, 0x94, 0xd1, 0xfe,
+	0x75, 0xd8, 0x1e, 0x72, 0x7f, 0x49, 0x0d, 0xfb, 0xeb, 0x7d, 0xf3, 0xf1, 0x90, 0x44, 0xe4, 0x95,
+	0xfc, 0xb3, 0xfa, 0xca, 0x3a, 0xaa, 0xa8, 0x7f, 0xad, 0xb7, 0xfe, 0x0d, 0x00, 0x00, 0xff, 0xff,
+	0x09, 0x4e, 0x55, 0x14, 0x01, 0x0f, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.proto b/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.proto
index 86234d2..290eb91 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.proto
@@ -89,6 +89,15 @@
   bytes end_key = 3;
 }
 
+// Specifies a non-contiguous set of rows.
+message RowSet {
+  // Single rows included in the set.
+  repeated bytes row_keys = 1;
+
+  // Contiguous row ranges included in the set.
+  repeated RowRange row_ranges = 2;
+}
+
 // Specifies a contiguous range of columns within a single column family.
 // The range spans from <column_family>:<start_qualifier> to
 // <column_family>:<end_qualifier>, where both bounds can be either inclusive or
@@ -374,15 +383,21 @@
     ValueRange value_range_filter = 15;
 
     // Skips the first N cells of each row, matching all subsequent cells.
+    // If duplicate cells are present, as is possible when using an Interleave,
+    // each copy of the cell is counted separately.
     int32 cells_per_row_offset_filter = 10;
 
     // Matches only the first N cells of each row.
+    // If duplicate cells are present, as is possible when using an Interleave,
+    // each copy of the cell is counted separately.
     int32 cells_per_row_limit_filter = 11;
 
     // Matches only the most recent N cells within each column. For example,
     // if N=2, this filter would match column "foo:bar" at timestamps 10 and 9,
     // skip all earlier cells in "foo:bar", and then begin matching again in
     // column "foo:bar2".
+    // If duplicate cells are present, as is possible when using an Interleave,
+    // each copy of the cell is counted separately.
     int32 cells_per_column_limit_filter = 12;
 
     // Replaces each cell's value with the empty string.
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/duration_proto/duration.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/duration_proto/duration.pb.go
deleted file mode 100644
index 8bc9294..0000000
--- a/go/src/google.golang.org/cloud/bigtable/internal/duration_proto/duration.pb.go
+++ /dev/null
@@ -1,100 +0,0 @@
-// Code generated by protoc-gen-go.
-// source: google.golang.org/cloud/bigtable/internal/duration_proto/duration.proto
-// DO NOT EDIT!
-
-/*
-Package google_protobuf is a generated protocol buffer package.
-
-It is generated from these files:
-	google.golang.org/cloud/bigtable/internal/duration_proto/duration.proto
-
-It has these top-level messages:
-	Duration
-*/
-package google_protobuf
-
-import proto "github.com/golang/protobuf/proto"
-import fmt "fmt"
-import math "math"
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ = proto.Marshal
-var _ = fmt.Errorf
-var _ = math.Inf
-
-// A Duration represents a signed, fixed-length span of time represented
-// as a count of seconds and fractions of seconds at nanosecond
-// resolution. It is independent of any calendar and concepts like "day"
-// or "month". It is related to Timestamp in that the difference between
-// two Timestamp values is a Duration and it can be added or subtracted
-// from a Timestamp. Range is approximately +-10,000 years.
-//
-// Example 1: Compute Duration from two Timestamps in pseudo code.
-//
-//     Timestamp start = ...;
-//     Timestamp end = ...;
-//     Duration duration = ...;
-//
-//     duration.seconds = end.seconds - start.seconds;
-//     duration.nanos = end.nanos - start.nanos;
-//
-//     if (duration.seconds < 0 && duration.nanos > 0) {
-//       duration.seconds += 1;
-//       duration.nanos -= 1000000000;
-//     } else if (durations.seconds > 0 && duration.nanos < 0) {
-//       duration.seconds -= 1;
-//       duration.nanos += 1000000000;
-//     }
-//
-// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
-//
-//     Timestamp start = ...;
-//     Duration duration = ...;
-//     Timestamp end = ...;
-//
-//     end.seconds = start.seconds + duration.seconds;
-//     end.nanos = start.nanos + duration.nanos;
-//
-//     if (end.nanos < 0) {
-//       end.seconds -= 1;
-//       end.nanos += 1000000000;
-//     } else if (end.nanos >= 1000000000) {
-//       end.seconds += 1;
-//       end.nanos -= 1000000000;
-//     }
-//
-type Duration struct {
-	// Signed seconds of the span of time. Must be from -315,576,000,000
-	// to +315,576,000,000 inclusive.
-	Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
-	// Signed fractions of a second at nanosecond resolution of the span
-	// of time. Durations less than one second are represented with a 0
-	// `seconds` field and a positive or negative `nanos` field. For durations
-	// of one second or more, a non-zero value for the `nanos` field must be
-	// of the same sign as the `seconds` field. Must be from -999,999,999
-	// to +999,999,999 inclusive.
-	Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
-}
-
-func (m *Duration) Reset()                    { *m = Duration{} }
-func (m *Duration) String() string            { return proto.CompactTextString(m) }
-func (*Duration) ProtoMessage()               {}
-func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
-
-func init() {
-	proto.RegisterType((*Duration)(nil), "google.protobuf.Duration")
-}
-
-var fileDescriptor0 = []byte{
-	// 160 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x72, 0x4f, 0xcf, 0xcf, 0x4f,
-	0xcf, 0x49, 0xd5, 0x4b, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0xcb, 0x2f, 0x4a, 0xd7, 0x4f, 0xce,
-	0xc9, 0x2f, 0x4d, 0xd1, 0x4f, 0xca, 0x4c, 0x2f, 0x49, 0x4c, 0xca, 0x49, 0xd5, 0xcf, 0xcc, 0x2b,
-	0x49, 0x2d, 0xca, 0x4b, 0xcc, 0xd1, 0x4f, 0x29, 0x2d, 0x4a, 0x2c, 0xc9, 0xcc, 0xcf, 0x8b, 0x2f,
-	0x28, 0xca, 0x2f, 0xc9, 0x87, 0x73, 0xf5, 0xc0, 0x5c, 0x21, 0x7e, 0xa8, 0x41, 0x60, 0x5e, 0x52,
-	0x69, 0x9a, 0x92, 0x16, 0x17, 0x87, 0x0b, 0x54, 0x89, 0x10, 0x3f, 0x17, 0x7b, 0x71, 0x6a, 0x72,
-	0x7e, 0x5e, 0x4a, 0xb1, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0xb3, 0x10, 0x2f, 0x17, 0x6b, 0x5e, 0x62,
-	0x5e, 0x7e, 0xb1, 0x04, 0x13, 0x90, 0xcb, 0xea, 0xa4, 0xc9, 0x25, 0x9c, 0x9c, 0x9f, 0xab, 0x87,
-	0x66, 0x84, 0x13, 0x2f, 0xcc, 0x80, 0x00, 0x90, 0x48, 0x00, 0xe3, 0x02, 0x46, 0xc6, 0x24, 0x36,
-	0xb0, 0xac, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0xc3, 0x14, 0xb7, 0x46, 0xb9, 0x00, 0x00, 0x00,
-}
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/duration_proto/duration.proto b/go/src/google.golang.org/cloud/bigtable/internal/duration_proto/duration.proto
deleted file mode 100644
index 15e9d44..0000000
--- a/go/src/google.golang.org/cloud/bigtable/internal/duration_proto/duration.proto
+++ /dev/null
@@ -1,78 +0,0 @@
-// Copyright (c) 2015, Google Inc.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-syntax = "proto3";
-
-package google.protobuf;
-
-option java_generate_equals_and_hash = true;
-option java_multiple_files = true;
-option java_outer_classname = "DurationProto";
-option java_package = "com.google.protobuf";
-
-
-// A Duration represents a signed, fixed-length span of time represented
-// as a count of seconds and fractions of seconds at nanosecond
-// resolution. It is independent of any calendar and concepts like "day"
-// or "month". It is related to Timestamp in that the difference between
-// two Timestamp values is a Duration and it can be added or subtracted
-// from a Timestamp. Range is approximately +-10,000 years.
-//
-// Example 1: Compute Duration from two Timestamps in pseudo code.
-//
-//     Timestamp start = ...;
-//     Timestamp end = ...;
-//     Duration duration = ...;
-//
-//     duration.seconds = end.seconds - start.seconds;
-//     duration.nanos = end.nanos - start.nanos;
-//
-//     if (duration.seconds < 0 && duration.nanos > 0) {
-//       duration.seconds += 1;
-//       duration.nanos -= 1000000000;
-//     } else if (durations.seconds > 0 && duration.nanos < 0) {
-//       duration.seconds -= 1;
-//       duration.nanos += 1000000000;
-//     }
-//
-// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
-//
-//     Timestamp start = ...;
-//     Duration duration = ...;
-//     Timestamp end = ...;
-//
-//     end.seconds = start.seconds + duration.seconds;
-//     end.nanos = start.nanos + duration.nanos;
-//
-//     if (end.nanos < 0) {
-//       end.seconds -= 1;
-//       end.nanos += 1000000000;
-//     } else if (end.nanos >= 1000000000) {
-//       end.seconds += 1;
-//       end.nanos -= 1000000000;
-//     }
-//
-message Duration {
-  // Signed seconds of the span of time. Must be from -315,576,000,000
-  // to +315,576,000,000 inclusive.
-  int64 seconds = 1;
-
-  // Signed fractions of a second at nanosecond resolution of the span
-  // of time. Durations less than one second are represented with a 0
-  // `seconds` field and a positive or negative `nanos` field. For durations
-  // of one second or more, a non-zero value for the `nanos` field must be
-  // of the same sign as the `seconds` field. Must be from -999,999,999
-  // to +999,999,999 inclusive.
-  int32 nanos = 2;
-}
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/empty/empty.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/empty/empty.pb.go
deleted file mode 100644
index 58e001c..0000000
--- a/go/src/google.golang.org/cloud/bigtable/internal/empty/empty.pb.go
+++ /dev/null
@@ -1,55 +0,0 @@
-// Code generated by protoc-gen-go.
-// source: google.golang.org/cloud/bigtable/internal/empty/empty.proto
-// DO NOT EDIT!
-
-/*
-Package google_protobuf is a generated protocol buffer package.
-
-It is generated from these files:
-	google.golang.org/cloud/bigtable/internal/empty/empty.proto
-
-It has these top-level messages:
-	Empty
-*/
-package google_protobuf
-
-import proto "github.com/golang/protobuf/proto"
-import fmt "fmt"
-import math "math"
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ = proto.Marshal
-var _ = fmt.Errorf
-var _ = math.Inf
-
-// A generic empty message that you can re-use to avoid defining duplicated
-// empty messages in your APIs. A typical example is to use it as the request
-// or the response type of an API method. For instance:
-//
-//     service Foo {
-//       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
-//     }
-//
-type Empty struct {
-}
-
-func (m *Empty) Reset()                    { *m = Empty{} }
-func (m *Empty) String() string            { return proto.CompactTextString(m) }
-func (*Empty) ProtoMessage()               {}
-func (*Empty) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
-
-func init() {
-	proto.RegisterType((*Empty)(nil), "google.protobuf.Empty")
-}
-
-var fileDescriptor0 = []byte{
-	// 120 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xb2, 0x4e, 0xcf, 0xcf, 0x4f,
-	0xcf, 0x49, 0xd5, 0x4b, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0xcb, 0x2f, 0x4a, 0xd7, 0x4f, 0xce,
-	0xc9, 0x2f, 0x4d, 0xd1, 0x4f, 0xca, 0x4c, 0x2f, 0x49, 0x4c, 0xca, 0x49, 0xd5, 0xcf, 0xcc, 0x2b,
-	0x49, 0x2d, 0xca, 0x4b, 0xcc, 0xd1, 0x4f, 0xcd, 0x2d, 0x28, 0xa9, 0x84, 0x90, 0x7a, 0x05, 0x45,
-	0xf9, 0x25, 0xf9, 0x42, 0xfc, 0x50, 0xcd, 0x60, 0x5e, 0x52, 0x69, 0x9a, 0x12, 0x3b, 0x17, 0xab,
-	0x2b, 0x48, 0xde, 0x49, 0x99, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0x4d, 0xde, 0x89, 0x0b, 0x2c,
-	0x1b, 0x00, 0xe2, 0x06, 0x30, 0x26, 0xb1, 0x81, 0xc5, 0x8d, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff,
-	0xa0, 0x50, 0xb8, 0x83, 0x84, 0x00, 0x00, 0x00,
-}
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/empty/empty.proto b/go/src/google.golang.org/cloud/bigtable/internal/empty/empty.proto
deleted file mode 100644
index 43b06e8..0000000
--- a/go/src/google.golang.org/cloud/bigtable/internal/empty/empty.proto
+++ /dev/null
@@ -1,34 +0,0 @@
-// Copyright (c) 2015, Google Inc.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-syntax = "proto3";
-
-package google.protobuf;
-
-option java_multiple_files = true;
-option java_outer_classname = "EmptyProto";
-option java_package = "com.google.protobuf";
-
-
-// A generic empty message that you can re-use to avoid defining duplicated
-// empty messages in your APIs. A typical example is to use it as the request
-// or the response type of an API method. For instance:
-//
-//     service Foo {
-//       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
-//     }
-//
-message Empty {
-
-}
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/regen.sh b/go/src/google.golang.org/cloud/bigtable/internal/regen.sh
index 6cfa168..862a1d9 100755
--- a/go/src/google.golang.org/cloud/bigtable/internal/regen.sh
+++ b/go/src/google.golang.org/cloud/bigtable/internal/regen.sh
@@ -58,11 +58,9 @@
     sed -f $import_fixes |
     # Drop the UndeleteCluster RPC method. It returns a google.longrunning.Operation.
     sed '/^  rpc UndeleteCluster(/,/^  }$/d' |
-    # Drop annotations, long-running operations and timestamps. They aren't supported (yet).
+    # Drop annotations and long-running operations. They aren't supported (yet).
     sed '/"google\/longrunning\/operations.proto"/d' |
     sed '/google.longrunning.Operation/d' |
-    sed '/"google\/protobuf\/timestamp.proto"/d' |
-    sed '/google\.protobuf\.Timestamp/d' |
     sed '/"google\/api\/annotations.proto"/d' |
     sed '/option.*google\.api\.http.*{.*};$/d' |
     cat > $PKG/$f
@@ -71,6 +69,6 @@
 # Run protoc once per package.
 for dir in $(find $PKG/internal -name '*.proto' | xargs dirname | sort | uniq); do
   echo 1>&2 "* $dir"
-  protoc --go_out=plugins=grpc:. $dir/*.proto
+  protoc --go_out=plugins=grpc,Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,Mgoogle/protobuf/timestamp.proto=github.com/golang/protobuf/ptypes/timestamp,Mgoogle/protobuf/empty.proto=github.com/golang/protobuf/ptypes/empty:. $dir/*.proto
 done
 echo 1>&2 "All OK"
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.pb.go
new file mode 100644
index 0000000..24d10d6
--- /dev/null
+++ b/go/src/google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.pb.go
@@ -0,0 +1,125 @@
+// Code generated by protoc-gen-go.
+// source: google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.proto
+// DO NOT EDIT!
+
+/*
+Package google_rpc is a generated protocol buffer package.
+
+It is generated from these files:
+	google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.proto
+
+It has these top-level messages:
+	Status
+*/
+package google_rpc
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import google_protobuf "github.com/golang/protobuf/ptypes/any"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
+// The `Status` type defines a logical error model that is suitable for different
+// programming environments, including REST APIs and RPC APIs. It is used by
+// [gRPC](https://github.com/grpc). The error model is designed to be:
+//
+// - Simple to use and understand for most users
+// - Flexible enough to meet unexpected needs
+//
+// # Overview
+//
+// The `Status` message contains three pieces of data: error code, error message,
+// and error details. The error code should be an enum value of
+// [google.rpc.Code][google.rpc.Code], but it may accept additional error codes if needed.  The
+// error message should be a developer-facing English message that helps
+// developers *understand* and *resolve* the error. If a localized user-facing
+// error message is needed, put the localized message in the error details or
+// localize it in the client. The optional error details may contain arbitrary
+// information about the error. There is a predefined set of error detail types
+// in the package `google.rpc` which can be used for common error conditions.
+//
+// # Language mapping
+//
+// The `Status` message is the logical representation of the error model, but it
+// is not necessarily the actual wire format. When the `Status` message is
+// exposed in different client libraries and different wire protocols, it can be
+// mapped differently. For example, it will likely be mapped to some exceptions
+// in Java, but more likely mapped to some error codes in C.
+//
+// # Other uses
+//
+// The error model and the `Status` message can be used in a variety of
+// environments, either with or without APIs, to provide a
+// consistent developer experience across different environments.
+//
+// Example uses of this error model include:
+//
+// - Partial errors. If a service needs to return partial errors to the client,
+//     it may embed the `Status` in the normal response to indicate the partial
+//     errors.
+//
+// - Workflow errors. A typical workflow has multiple steps. Each step may
+//     have a `Status` message for error reporting purpose.
+//
+// - Batch operations. If a client uses batch request and batch response, the
+//     `Status` message should be used directly inside batch response, one for
+//     each error sub-response.
+//
+// - Asynchronous operations. If an API call embeds asynchronous operation
+//     results in its response, the status of those operations should be
+//     represented directly using the `Status` message.
+//
+// - Logging. If some API errors are stored in logs, the message `Status` could
+//     be used directly after any stripping needed for security/privacy reasons.
+type Status struct {
+	// The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code].
+	Code int32 `protobuf:"varint,1,opt,name=code" json:"code,omitempty"`
+	// A developer-facing error message, which should be in English. Any
+	// user-facing error message should be localized and sent in the
+	// [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client.
+	Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
+	// A list of messages that carry the error details.  There will be a
+	// common set of message types for APIs to use.
+	Details []*google_protobuf.Any `protobuf:"bytes,3,rep,name=details" json:"details,omitempty"`
+}
+
+func (m *Status) Reset()                    { *m = Status{} }
+func (m *Status) String() string            { return proto.CompactTextString(m) }
+func (*Status) ProtoMessage()               {}
+func (*Status) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+func (m *Status) GetDetails() []*google_protobuf.Any {
+	if m != nil {
+		return m.Details
+	}
+	return nil
+}
+
+func init() {
+	proto.RegisterType((*Status)(nil), "google.rpc.Status")
+}
+
+var fileDescriptor0 = []byte{
+	// 203 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x44, 0x8e, 0xb1, 0x6e, 0x85, 0x20,
+	0x18, 0x85, 0x43, 0x6d, 0x35, 0xc5, 0xa4, 0x03, 0xe9, 0x40, 0xbb, 0xd4, 0x74, 0x72, 0x82, 0xa4,
+	0x7d, 0x82, 0xba, 0x74, 0x35, 0xf6, 0x01, 0x0c, 0x20, 0x12, 0x13, 0xe4, 0x37, 0x80, 0x83, 0x6f,
+	0xdf, 0x5c, 0xd0, 0xdc, 0xed, 0x9c, 0xf0, 0x1d, 0xbe, 0x1f, 0xff, 0x1a, 0x00, 0x63, 0x35, 0x33,
+	0x60, 0x85, 0x33, 0x0c, 0xbc, 0xe1, 0xca, 0xc2, 0x3e, 0x71, 0xb9, 0x98, 0x28, 0xa4, 0xd5, 0x7c,
+	0x71, 0x51, 0x7b, 0x27, 0x2c, 0xf7, 0x9b, 0x1a, 0x43, 0x14, 0x71, 0x0f, 0xe3, 0xe6, 0x21, 0x02,
+	0xcf, 0x85, 0xa5, 0x42, 0xf0, 0xf9, 0x91, 0xdf, 0xd4, 0xfb, 0x5b, 0xce, 0x3c, 0xbd, 0xc8, 0x7d,
+	0xe6, 0xc2, 0x1d, 0x19, 0xfb, 0x9c, 0x71, 0xf9, 0x97, 0x66, 0x84, 0xe0, 0x47, 0x05, 0x93, 0xa6,
+	0xa8, 0x41, 0xed, 0xd3, 0x90, 0x32, 0xa1, 0xb8, 0x5a, 0x75, 0x08, 0xc2, 0x68, 0xfa, 0xd0, 0xa0,
+	0xf6, 0x79, 0xb8, 0x2a, 0x61, 0xb8, 0x9a, 0x74, 0x14, 0x8b, 0x0d, 0xb4, 0x68, 0x8a, 0xb6, 0xfe,
+	0x7a, 0x65, 0xa7, 0xf0, 0x92, 0xb0, 0x1f, 0x77, 0x0c, 0x17, 0xd4, 0x7d, 0xe0, 0x17, 0x05, 0x2b,
+	0xbb, 0x1f, 0xd5, 0xd5, 0xd9, 0xdb, 0xdf, 0xf0, 0x1e, 0xc9, 0x32, 0xed, 0xbe, 0xff, 0x03, 0x00,
+	0x00, 0xff, 0xff, 0x77, 0xd3, 0x68, 0xaf, 0x01, 0x01, 0x00, 0x00,
+}
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.proto b/go/src/google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.proto
new file mode 100644
index 0000000..8fca6ab
--- /dev/null
+++ b/go/src/google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.proto
@@ -0,0 +1,90 @@
+// Copyright (c) 2015, Google Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+syntax = "proto3";
+
+package google.rpc;
+
+import "google/protobuf/any.proto";
+
+option java_multiple_files = true;
+option java_outer_classname = "StatusProto";
+option java_package = "com.google.rpc";
+
+
+// The `Status` type defines a logical error model that is suitable for different
+// programming environments, including REST APIs and RPC APIs. It is used by
+// [gRPC](https://github.com/grpc). The error model is designed to be:
+//
+// - Simple to use and understand for most users
+// - Flexible enough to meet unexpected needs
+//
+// # Overview
+//
+// The `Status` message contains three pieces of data: error code, error message,
+// and error details. The error code should be an enum value of
+// [google.rpc.Code][google.rpc.Code], but it may accept additional error codes if needed.  The
+// error message should be a developer-facing English message that helps
+// developers *understand* and *resolve* the error. If a localized user-facing
+// error message is needed, put the localized message in the error details or
+// localize it in the client. The optional error details may contain arbitrary
+// information about the error. There is a predefined set of error detail types
+// in the package `google.rpc` which can be used for common error conditions.
+//
+// # Language mapping
+//
+// The `Status` message is the logical representation of the error model, but it
+// is not necessarily the actual wire format. When the `Status` message is
+// exposed in different client libraries and different wire protocols, it can be
+// mapped differently. For example, it will likely be mapped to some exceptions
+// in Java, but more likely mapped to some error codes in C.
+//
+// # Other uses
+//
+// The error model and the `Status` message can be used in a variety of
+// environments, either with or without APIs, to provide a
+// consistent developer experience across different environments.
+//
+// Example uses of this error model include:
+//
+// - Partial errors. If a service needs to return partial errors to the client,
+//     it may embed the `Status` in the normal response to indicate the partial
+//     errors.
+//
+// - Workflow errors. A typical workflow has multiple steps. Each step may
+//     have a `Status` message for error reporting purpose.
+//
+// - Batch operations. If a client uses batch request and batch response, the
+//     `Status` message should be used directly inside batch response, one for
+//     each error sub-response.
+//
+// - Asynchronous operations. If an API call embeds asynchronous operation
+//     results in its response, the status of those operations should be
+//     represented directly using the `Status` message.
+//
+// - Logging. If some API errors are stored in logs, the message `Status` could
+//     be used directly after any stripping needed for security/privacy reasons.
+message Status {
+  // The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code].
+  int32 code = 1;
+
+  // A developer-facing error message, which should be in English. Any
+  // user-facing error message should be localized and sent in the
+  // [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client.
+  string message = 2;
+
+  // A list of messages that carry the error details.  There will be a
+  // common set of message types for APIs to use.
+  repeated google.protobuf.Any details = 3;
+}
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.pb.go
index d72d108..3f31300 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.pb.go
@@ -8,7 +8,7 @@
 import fmt "fmt"
 import math "math"
 import google_bigtable_v11 "google.golang.org/cloud/bigtable/internal/data_proto"
-import google_protobuf "google.golang.org/cloud/bigtable/internal/empty"
+import google_protobuf1 "github.com/golang/protobuf/ptypes/empty"
 
 import (
 	context "golang.org/x/net/context"
@@ -24,6 +24,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for BigtableService service
 
 type BigtableServiceClient interface {
@@ -39,7 +43,11 @@
 	SampleRowKeys(ctx context.Context, in *SampleRowKeysRequest, opts ...grpc.CallOption) (BigtableService_SampleRowKeysClient, error)
 	// Mutates a row atomically. Cells already present in the row are left
 	// unchanged unless explicitly changed by 'mutation'.
-	MutateRow(ctx context.Context, in *MutateRowRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error)
+	MutateRow(ctx context.Context, in *MutateRowRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error)
+	// Mutates multiple rows in a batch. Each individual row is mutated
+	// atomically as in MutateRow, but the entire batch is not executed
+	// atomically.
+	MutateRows(ctx context.Context, in *MutateRowsRequest, opts ...grpc.CallOption) (*MutateRowsResponse, error)
 	// Mutates a row atomically based on the output of a predicate Reader filter.
 	CheckAndMutateRow(ctx context.Context, in *CheckAndMutateRowRequest, opts ...grpc.CallOption) (*CheckAndMutateRowResponse, error)
 	// Modifies a row atomically, reading the latest existing timestamp/value from
@@ -121,8 +129,8 @@
 	return m, nil
 }
 
-func (c *bigtableServiceClient) MutateRow(ctx context.Context, in *MutateRowRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error) {
-	out := new(google_protobuf.Empty)
+func (c *bigtableServiceClient) MutateRow(ctx context.Context, in *MutateRowRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error) {
+	out := new(google_protobuf1.Empty)
 	err := grpc.Invoke(ctx, "/google.bigtable.v1.BigtableService/MutateRow", in, out, c.cc, opts...)
 	if err != nil {
 		return nil, err
@@ -130,6 +138,15 @@
 	return out, nil
 }
 
+func (c *bigtableServiceClient) MutateRows(ctx context.Context, in *MutateRowsRequest, opts ...grpc.CallOption) (*MutateRowsResponse, error) {
+	out := new(MutateRowsResponse)
+	err := grpc.Invoke(ctx, "/google.bigtable.v1.BigtableService/MutateRows", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
 func (c *bigtableServiceClient) CheckAndMutateRow(ctx context.Context, in *CheckAndMutateRowRequest, opts ...grpc.CallOption) (*CheckAndMutateRowResponse, error) {
 	out := new(CheckAndMutateRowResponse)
 	err := grpc.Invoke(ctx, "/google.bigtable.v1.BigtableService/CheckAndMutateRow", in, out, c.cc, opts...)
@@ -163,7 +180,11 @@
 	SampleRowKeys(*SampleRowKeysRequest, BigtableService_SampleRowKeysServer) error
 	// Mutates a row atomically. Cells already present in the row are left
 	// unchanged unless explicitly changed by 'mutation'.
-	MutateRow(context.Context, *MutateRowRequest) (*google_protobuf.Empty, error)
+	MutateRow(context.Context, *MutateRowRequest) (*google_protobuf1.Empty, error)
+	// Mutates multiple rows in a batch. Each individual row is mutated
+	// atomically as in MutateRow, but the entire batch is not executed
+	// atomically.
+	MutateRows(context.Context, *MutateRowsRequest) (*MutateRowsResponse, error)
 	// Mutates a row atomically based on the output of a predicate Reader filter.
 	CheckAndMutateRow(context.Context, *CheckAndMutateRowRequest) (*CheckAndMutateRowResponse, error)
 	// Modifies a row atomically, reading the latest existing timestamp/value from
@@ -219,40 +240,76 @@
 	return x.ServerStream.SendMsg(m)
 }
 
-func _BigtableService_MutateRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableService_MutateRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(MutateRowRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableServiceServer).MutateRow(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableServiceServer).MutateRow(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.v1.BigtableService/MutateRow",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableServiceServer).MutateRow(ctx, req.(*MutateRowRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableService_CheckAndMutateRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableService_MutateRows_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(MutateRowsRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(BigtableServiceServer).MutateRows(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.v1.BigtableService/MutateRows",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableServiceServer).MutateRows(ctx, req.(*MutateRowsRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _BigtableService_CheckAndMutateRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(CheckAndMutateRowRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableServiceServer).CheckAndMutateRow(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableServiceServer).CheckAndMutateRow(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.v1.BigtableService/CheckAndMutateRow",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableServiceServer).CheckAndMutateRow(ctx, req.(*CheckAndMutateRowRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableService_ReadModifyWriteRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableService_ReadModifyWriteRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(ReadModifyWriteRowRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableServiceServer).ReadModifyWriteRow(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableServiceServer).ReadModifyWriteRow(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.v1.BigtableService/ReadModifyWriteRow",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableServiceServer).ReadModifyWriteRow(ctx, req.(*ReadModifyWriteRowRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 var _BigtableService_serviceDesc = grpc.ServiceDesc{
@@ -264,6 +321,10 @@
 			Handler:    _BigtableService_MutateRow_Handler,
 		},
 		{
+			MethodName: "MutateRows",
+			Handler:    _BigtableService_MutateRows_Handler,
+		},
+		{
 			MethodName: "CheckAndMutateRow",
 			Handler:    _BigtableService_CheckAndMutateRow_Handler,
 		},
@@ -287,26 +348,28 @@
 }
 
 var fileDescriptor1 = []byte{
-	// 335 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x92, 0x4f, 0x4b, 0xc3, 0x40,
-	0x10, 0xc5, 0x1b, 0x04, 0xd1, 0x05, 0x11, 0x17, 0xac, 0xd0, 0x63, 0xf5, 0xa0, 0x60, 0x37, 0xfe,
-	0xbb, 0x79, 0xb2, 0x22, 0x08, 0xa5, 0x52, 0xda, 0x43, 0xf1, 0x62, 0xd9, 0x24, 0xd3, 0x6d, 0x70,
-	0x93, 0x89, 0xbb, 0x9b, 0x96, 0x7e, 0x03, 0xbf, 0xb1, 0x57, 0x4d, 0x93, 0xad, 0xb6, 0x8d, 0x92,
-	0x83, 0x97, 0x1c, 0xe6, 0xfd, 0xe6, 0xbd, 0xc9, 0x63, 0xc9, 0x93, 0x40, 0x14, 0x12, 0x98, 0x40,
-	0xc9, 0x63, 0xc1, 0x50, 0x09, 0xd7, 0x97, 0x98, 0x06, 0xae, 0x17, 0x0a, 0xc3, 0x3d, 0x09, 0x6e,
-	0x18, 0x1b, 0x50, 0x31, 0x97, 0xae, 0x06, 0x35, 0x0d, 0x7d, 0x18, 0x25, 0x0a, 0x0d, 0x2e, 0xf5,
-	0x51, 0x31, 0x66, 0x8b, 0x31, 0xa5, 0x85, 0x9f, 0x95, 0xd9, 0xf4, 0xb2, 0xf1, 0x58, 0x3d, 0x23,
-	0xe0, 0x86, 0xaf, 0x07, 0x64, 0xb3, 0xdc, 0xbd, 0x31, 0xfc, 0xaf, 0x6b, 0x47, 0x11, 0x68, 0xcd,
-	0x05, 0xe8, 0xc2, 0xf8, 0xb6, 0xba, 0x31, 0x44, 0x89, 0x99, 0xe7, 0xdf, 0x7c, 0xf9, 0xea, 0x63,
-	0x8b, 0xec, 0xb7, 0x0b, 0x6e, 0x90, 0xfb, 0xd3, 0x67, 0xb2, 0xd3, 0x07, 0x1e, 0xf4, 0x71, 0xa6,
-	0xe9, 0x31, 0xdb, 0x2c, 0x85, 0x59, 0xb5, 0x0f, 0x6f, 0x29, 0x68, 0xd3, 0x38, 0xf9, 0x1b, 0xd2,
-	0x09, 0xc6, 0x1a, 0x9a, 0xb5, 0x0b, 0x87, 0x4e, 0xc8, 0xde, 0x80, 0x47, 0x89, 0x84, 0x2f, 0xa5,
-	0x03, 0x73, 0x4d, 0x4f, 0xcb, 0x56, 0x57, 0x10, 0x1b, 0x72, 0x56, 0x81, 0xfc, 0x91, 0xd4, 0x21,
-	0xbb, 0xdd, 0xd4, 0x70, 0x93, 0x89, 0xb4, 0xf4, 0xc0, 0xa5, 0x6c, 0x13, 0xea, 0x96, 0x5a, 0x54,
-	0xe3, 0xa5, 0x63, 0xf6, 0x90, 0x35, 0xd5, 0xac, 0x51, 0x45, 0x0e, 0xee, 0x27, 0xe0, 0xbf, 0xde,
-	0xc5, 0xc1, 0xb7, 0xe9, 0x79, 0x99, 0xe9, 0x06, 0x66, 0xcd, 0x5b, 0x15, 0x69, 0xfb, 0x0b, 0xf4,
-	0x85, 0xd0, 0xac, 0xc2, 0x2e, 0x06, 0xe1, 0x78, 0x3e, 0x54, 0x61, 0x1e, 0xda, 0xfa, 0xad, 0xea,
-	0x55, 0xce, 0xa6, 0x1e, 0x95, 0xe2, 0x38, 0x6b, 0xd6, 0xda, 0x37, 0xa4, 0xee, 0x63, 0x54, 0xa2,
-	0xb7, 0x0f, 0xd7, 0x1e, 0x84, 0xee, 0x65, 0x7d, 0xf4, 0x9c, 0x77, 0xc7, 0xf1, 0xb6, 0x17, 0xdd,
-	0x5c, 0x7f, 0x06, 0x00, 0x00, 0xff, 0xff, 0x63, 0x50, 0x8b, 0x8e, 0x7c, 0x03, 0x00, 0x00,
+	// 353 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x92, 0xcd, 0x4a, 0xc3, 0x40,
+	0x14, 0x85, 0x1b, 0x10, 0xd1, 0x01, 0x11, 0x07, 0xac, 0x50, 0x77, 0xf5, 0x07, 0x05, 0x3b, 0xf1,
+	0xef, 0x05, 0xac, 0x08, 0x42, 0xa9, 0x94, 0x74, 0x51, 0x5c, 0x68, 0x99, 0x24, 0xb7, 0xd3, 0x60,
+	0x92, 0x1b, 0x33, 0x93, 0x96, 0xbe, 0x81, 0x4f, 0xe8, 0xf3, 0x48, 0x9a, 0x4c, 0xb4, 0x6d, 0xac,
+	0x59, 0xb8, 0xec, 0x3d, 0xdf, 0x3d, 0xe7, 0xf4, 0x4e, 0xc8, 0x93, 0x40, 0x14, 0x3e, 0x30, 0x81,
+	0x3e, 0x0f, 0x05, 0xc3, 0x58, 0x98, 0x8e, 0x8f, 0x89, 0x6b, 0xda, 0x9e, 0x50, 0xdc, 0xf6, 0xc1,
+	0xf4, 0x42, 0x05, 0x71, 0xc8, 0x7d, 0x53, 0x42, 0x3c, 0xf1, 0x1c, 0x18, 0x46, 0x31, 0x2a, 0x2c,
+	0xf4, 0x61, 0x3e, 0x66, 0xf3, 0x31, 0xa5, 0xb9, 0x9f, 0x96, 0xd9, 0xe4, 0xaa, 0xf1, 0x58, 0x3d,
+	0xc3, 0xe5, 0x8a, 0x2f, 0x07, 0xa4, 0xb3, 0xcc, 0xbd, 0x31, 0xf8, 0xaf, 0xb6, 0xc3, 0x00, 0xa4,
+	0xe4, 0x02, 0x64, 0x6e, 0x7c, 0x98, 0x19, 0x9b, 0xf3, 0x5f, 0x76, 0x32, 0x32, 0x21, 0x88, 0xd4,
+	0x2c, 0x13, 0xaf, 0x3f, 0x37, 0xc8, 0x6e, 0x3b, 0x37, 0xe8, 0x67, 0xfb, 0xf4, 0x99, 0x6c, 0x59,
+	0xc0, 0x5d, 0x0b, 0xa7, 0x92, 0x1e, 0xb1, 0xd5, 0x3f, 0xcd, 0xb4, 0x6a, 0xc1, 0x7b, 0x02, 0x52,
+	0x35, 0x8e, 0xd7, 0x43, 0x32, 0xc2, 0x50, 0x42, 0xb3, 0x76, 0x69, 0xd0, 0x31, 0xd9, 0xe9, 0xf3,
+	0x20, 0xf2, 0xc1, 0xc2, 0x69, 0x07, 0x66, 0x92, 0x9e, 0x95, 0xad, 0x2e, 0x20, 0x3a, 0xe4, 0xbc,
+	0x02, 0xf9, 0x23, 0xa9, 0x43, 0xb6, 0xbb, 0x89, 0xe2, 0x2a, 0x15, 0x69, 0x69, 0xc1, 0x42, 0xd6,
+	0x09, 0x75, 0x4d, 0xe9, 0x4b, 0xb1, 0x87, 0xf4, 0x52, 0xcd, 0x1a, 0x7d, 0x21, 0xa4, 0xa0, 0x25,
+	0x3d, 0x59, 0xeb, 0x56, 0x14, 0x3e, 0xfd, 0x0b, 0xd3, 0x6d, 0x69, 0x4c, 0xf6, 0xee, 0xc7, 0xe0,
+	0xbc, 0xdd, 0x85, 0xee, 0x77, 0xe7, 0x8b, 0xb2, 0xf5, 0x15, 0x4c, 0x87, 0xb5, 0x2a, 0xd2, 0x45,
+	0xe6, 0x2b, 0xa1, 0xe9, 0x0b, 0x75, 0xd1, 0xf5, 0x46, 0xb3, 0x41, 0xec, 0x65, 0xa1, 0xad, 0xdf,
+	0x5e, 0x72, 0x91, 0xd3, 0xa9, 0x07, 0xa5, 0x38, 0x4e, 0x9b, 0xb5, 0xf6, 0x2d, 0xa9, 0x3b, 0x18,
+	0x94, 0xe8, 0xed, 0xfd, 0xa5, 0xef, 0x4d, 0xf6, 0xd2, 0x73, 0xf7, 0x8c, 0x0f, 0xc3, 0xb0, 0x37,
+	0xe7, 0xa7, 0xbf, 0xf9, 0x0a, 0x00, 0x00, 0xff, 0xff, 0xa3, 0x7b, 0xf2, 0x8a, 0xbb, 0x03, 0x00,
+	0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.proto b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.proto
index 814940a..b62ee48 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service.proto
@@ -18,7 +18,7 @@
 
 import "google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.proto";
 import "google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.proto";
-import "google.golang.org/cloud/bigtable/internal/empty/empty.proto";
+import "google/protobuf/empty.proto";
 
 option java_generic_services = true;
 option java_multiple_files = true;
@@ -47,6 +47,12 @@
   rpc MutateRow(MutateRowRequest) returns (google.protobuf.Empty) {
   }
 
+  // Mutates multiple rows in a batch. Each individual row is mutated
+  // atomically as in MutateRow, but the entire batch is not executed
+  // atomically.
+  rpc MutateRows(MutateRowsRequest) returns (MutateRowsResponse) {
+  }
+
   // Mutates a row atomically based on the output of a predicate Reader filter.
   rpc CheckAndMutateRow(CheckAndMutateRowRequest) returns (CheckAndMutateRowResponse) {
   }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.pb.go
index a30b96e..41fd7cd 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.pb.go
@@ -15,6 +15,8 @@
 	SampleRowKeysRequest
 	SampleRowKeysResponse
 	MutateRowRequest
+	MutateRowsRequest
+	MutateRowsResponse
 	CheckAndMutateRowRequest
 	CheckAndMutateRowResponse
 	ReadModifyWriteRowRequest
@@ -25,21 +27,27 @@
 import fmt "fmt"
 import math "math"
 import google_bigtable_v11 "google.golang.org/cloud/bigtable/internal/data_proto"
+import google_rpc "google.golang.org/cloud/bigtable/internal/rpc_status_proto"
 
 // Reference imports to suppress errors if they are not otherwise used.
 var _ = proto.Marshal
 var _ = fmt.Errorf
 var _ = math.Inf
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
 // Request message for BigtableServer.ReadRows.
 type ReadRowsRequest struct {
 	// The unique name of the table from which to read.
-	TableName string `protobuf:"bytes,1,opt,name=table_name" json:"table_name,omitempty"`
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
 	// If neither row_key nor row_range is set, reads from all rows.
 	//
 	// Types that are valid to be assigned to Target:
 	//	*ReadRowsRequest_RowKey
 	//	*ReadRowsRequest_RowRange
+	//	*ReadRowsRequest_RowSet
 	Target isReadRowsRequest_Target `protobuf_oneof:"target"`
 	// The filter to apply to the contents of the specified row(s). If unset,
 	// reads the entire table.
@@ -50,11 +58,11 @@
 	// the response stream, which increases throughput but breaks this guarantee,
 	// and may force the client to use more memory to buffer partially-received
 	// rows. Cannot be set to true when specifying "num_rows_limit".
-	AllowRowInterleaving bool `protobuf:"varint,6,opt,name=allow_row_interleaving" json:"allow_row_interleaving,omitempty"`
+	AllowRowInterleaving bool `protobuf:"varint,6,opt,name=allow_row_interleaving,json=allowRowInterleaving" json:"allow_row_interleaving,omitempty"`
 	// The read will terminate after committing to N rows' worth of results. The
 	// default (zero) is to return all results.
 	// Note that "allow_row_interleaving" cannot be set to true when this is set.
-	NumRowsLimit int64 `protobuf:"varint,7,opt,name=num_rows_limit" json:"num_rows_limit,omitempty"`
+	NumRowsLimit int64 `protobuf:"varint,7,opt,name=num_rows_limit,json=numRowsLimit" json:"num_rows_limit,omitempty"`
 }
 
 func (m *ReadRowsRequest) Reset()                    { *m = ReadRowsRequest{} }
@@ -67,14 +75,18 @@
 }
 
 type ReadRowsRequest_RowKey struct {
-	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,proto3,oneof"`
+	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3,oneof"`
 }
 type ReadRowsRequest_RowRange struct {
-	RowRange *google_bigtable_v11.RowRange `protobuf:"bytes,3,opt,name=row_range,oneof"`
+	RowRange *google_bigtable_v11.RowRange `protobuf:"bytes,3,opt,name=row_range,json=rowRange,oneof"`
+}
+type ReadRowsRequest_RowSet struct {
+	RowSet *google_bigtable_v11.RowSet `protobuf:"bytes,8,opt,name=row_set,json=rowSet,oneof"`
 }
 
 func (*ReadRowsRequest_RowKey) isReadRowsRequest_Target()   {}
 func (*ReadRowsRequest_RowRange) isReadRowsRequest_Target() {}
+func (*ReadRowsRequest_RowSet) isReadRowsRequest_Target()   {}
 
 func (m *ReadRowsRequest) GetTarget() isReadRowsRequest_Target {
 	if m != nil {
@@ -97,6 +109,13 @@
 	return nil
 }
 
+func (m *ReadRowsRequest) GetRowSet() *google_bigtable_v11.RowSet {
+	if x, ok := m.GetTarget().(*ReadRowsRequest_RowSet); ok {
+		return x.RowSet
+	}
+	return nil
+}
+
 func (m *ReadRowsRequest) GetFilter() *google_bigtable_v11.RowFilter {
 	if m != nil {
 		return m.Filter
@@ -109,6 +128,7 @@
 	return _ReadRowsRequest_OneofMarshaler, _ReadRowsRequest_OneofUnmarshaler, _ReadRowsRequest_OneofSizer, []interface{}{
 		(*ReadRowsRequest_RowKey)(nil),
 		(*ReadRowsRequest_RowRange)(nil),
+		(*ReadRowsRequest_RowSet)(nil),
 	}
 }
 
@@ -124,6 +144,11 @@
 		if err := b.EncodeMessage(x.RowRange); err != nil {
 			return err
 		}
+	case *ReadRowsRequest_RowSet:
+		b.EncodeVarint(8<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.RowSet); err != nil {
+			return err
+		}
 	case nil:
 	default:
 		return fmt.Errorf("ReadRowsRequest.Target has unexpected type %T", x)
@@ -149,6 +174,14 @@
 		err := b.DecodeMessage(msg)
 		m.Target = &ReadRowsRequest_RowRange{msg}
 		return true, err
+	case 8: // target.row_set
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(google_bigtable_v11.RowSet)
+		err := b.DecodeMessage(msg)
+		m.Target = &ReadRowsRequest_RowSet{msg}
+		return true, err
 	default:
 		return false, nil
 	}
@@ -167,6 +200,11 @@
 		n += proto.SizeVarint(3<<3 | proto.WireBytes)
 		n += proto.SizeVarint(uint64(s))
 		n += s
+	case *ReadRowsRequest_RowSet:
+		s := proto.Size(x.RowSet)
+		n += proto.SizeVarint(8<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
 	case nil:
 	default:
 		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
@@ -179,7 +217,7 @@
 	// The key of the row for which we're receiving data.
 	// Results will be received in increasing row key order, unless
 	// "allow_row_interleaving" was specified in the request.
-	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,proto3" json:"row_key,omitempty"`
+	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
 	// One or more chunks of the row specified by "row_key".
 	Chunks []*ReadRowsResponse_Chunk `protobuf:"bytes,2,rep,name=chunks" json:"chunks,omitempty"`
 }
@@ -216,13 +254,13 @@
 }
 
 type ReadRowsResponse_Chunk_RowContents struct {
-	RowContents *google_bigtable_v11.Family `protobuf:"bytes,1,opt,name=row_contents,oneof"`
+	RowContents *google_bigtable_v11.Family `protobuf:"bytes,1,opt,name=row_contents,json=rowContents,oneof"`
 }
 type ReadRowsResponse_Chunk_ResetRow struct {
-	ResetRow bool `protobuf:"varint,2,opt,name=reset_row,oneof"`
+	ResetRow bool `protobuf:"varint,2,opt,name=reset_row,json=resetRow,oneof"`
 }
 type ReadRowsResponse_Chunk_CommitRow struct {
-	CommitRow bool `protobuf:"varint,3,opt,name=commit_row,oneof"`
+	CommitRow bool `protobuf:"varint,3,opt,name=commit_row,json=commitRow,oneof"`
 }
 
 func (*ReadRowsResponse_Chunk_RowContents) isReadRowsResponse_Chunk_Chunk() {}
@@ -351,7 +389,7 @@
 // Request message for BigtableService.SampleRowKeys.
 type SampleRowKeysRequest struct {
 	// The unique name of the table from which to sample row keys.
-	TableName string `protobuf:"bytes,1,opt,name=table_name" json:"table_name,omitempty"`
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
 }
 
 func (m *SampleRowKeysRequest) Reset()                    { *m = SampleRowKeysRequest{} }
@@ -368,12 +406,12 @@
 	// Note that row keys in this list may not have ever been written to or read
 	// from, and users should therefore not make any assumptions about the row key
 	// structure that are specific to their use case.
-	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,proto3" json:"row_key,omitempty"`
+	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
 	// Approximate total storage space used by all rows in the table which precede
 	// "row_key". Buffering the contents of all rows between two subsequent
 	// samples would require space roughly equal to the difference in their
 	// "offset_bytes" fields.
-	OffsetBytes int64 `protobuf:"varint,2,opt,name=offset_bytes" json:"offset_bytes,omitempty"`
+	OffsetBytes int64 `protobuf:"varint,2,opt,name=offset_bytes,json=offsetBytes" json:"offset_bytes,omitempty"`
 }
 
 func (m *SampleRowKeysResponse) Reset()                    { *m = SampleRowKeysResponse{} }
@@ -384,9 +422,9 @@
 // Request message for BigtableService.MutateRow.
 type MutateRowRequest struct {
 	// The unique name of the table to which the mutation should be applied.
-	TableName string `protobuf:"bytes,1,opt,name=table_name" json:"table_name,omitempty"`
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
 	// The key of the row to which the mutation should be applied.
-	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,proto3" json:"row_key,omitempty"`
+	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
 	// Changes to be atomically applied to the specified row. Entries are applied
 	// in order, meaning that earlier mutations can be masked by later ones.
 	// Must contain at least one entry and at most 100000.
@@ -405,36 +443,104 @@
 	return nil
 }
 
+// Request message for BigtableService.MutateRows.
+type MutateRowsRequest struct {
+	// The unique name of the table to which the mutations should be applied.
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
+	// The row keys/mutations to be applied in bulk.
+	// Each entry is applied as an atomic mutation, but the entries may be
+	// applied in arbitrary order (even between entries for the same row).
+	// At least one entry must be specified, and in total the entries may
+	// contain at most 100000 mutations.
+	Entries []*MutateRowsRequest_Entry `protobuf:"bytes,2,rep,name=entries" json:"entries,omitempty"`
+}
+
+func (m *MutateRowsRequest) Reset()                    { *m = MutateRowsRequest{} }
+func (m *MutateRowsRequest) String() string            { return proto.CompactTextString(m) }
+func (*MutateRowsRequest) ProtoMessage()               {}
+func (*MutateRowsRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+
+func (m *MutateRowsRequest) GetEntries() []*MutateRowsRequest_Entry {
+	if m != nil {
+		return m.Entries
+	}
+	return nil
+}
+
+type MutateRowsRequest_Entry struct {
+	// The key of the row to which the `mutations` should be applied.
+	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
+	// Changes to be atomically applied to the specified row. Mutations are
+	// applied in order, meaning that earlier mutations can be masked by
+	// later ones.
+	// At least one mutation must be specified.
+	Mutations []*google_bigtable_v11.Mutation `protobuf:"bytes,2,rep,name=mutations" json:"mutations,omitempty"`
+}
+
+func (m *MutateRowsRequest_Entry) Reset()                    { *m = MutateRowsRequest_Entry{} }
+func (m *MutateRowsRequest_Entry) String() string            { return proto.CompactTextString(m) }
+func (*MutateRowsRequest_Entry) ProtoMessage()               {}
+func (*MutateRowsRequest_Entry) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5, 0} }
+
+func (m *MutateRowsRequest_Entry) GetMutations() []*google_bigtable_v11.Mutation {
+	if m != nil {
+		return m.Mutations
+	}
+	return nil
+}
+
+// Response message for BigtableService.MutateRows.
+type MutateRowsResponse struct {
+	// The results for each Entry from the request, presented in the order
+	// in which the entries were originally given.
+	// Depending on how requests are batched during execution, it is possible
+	// for one Entry to fail due to an error with another Entry. In the event
+	// that this occurs, the same error will be reported for both entries.
+	Statuses []*google_rpc.Status `protobuf:"bytes,1,rep,name=statuses" json:"statuses,omitempty"`
+}
+
+func (m *MutateRowsResponse) Reset()                    { *m = MutateRowsResponse{} }
+func (m *MutateRowsResponse) String() string            { return proto.CompactTextString(m) }
+func (*MutateRowsResponse) ProtoMessage()               {}
+func (*MutateRowsResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+
+func (m *MutateRowsResponse) GetStatuses() []*google_rpc.Status {
+	if m != nil {
+		return m.Statuses
+	}
+	return nil
+}
+
 // Request message for BigtableService.CheckAndMutateRowRequest
 type CheckAndMutateRowRequest struct {
 	// The unique name of the table to which the conditional mutation should be
 	// applied.
-	TableName string `protobuf:"bytes,1,opt,name=table_name" json:"table_name,omitempty"`
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
 	// The key of the row to which the conditional mutation should be applied.
-	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,proto3" json:"row_key,omitempty"`
+	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
 	// The filter to be applied to the contents of the specified row. Depending
 	// on whether or not any results are yielded, either "true_mutations" or
 	// "false_mutations" will be executed. If unset, checks that the row contains
 	// any values at all.
-	PredicateFilter *google_bigtable_v11.RowFilter `protobuf:"bytes,6,opt,name=predicate_filter" json:"predicate_filter,omitempty"`
+	PredicateFilter *google_bigtable_v11.RowFilter `protobuf:"bytes,6,opt,name=predicate_filter,json=predicateFilter" json:"predicate_filter,omitempty"`
 	// Changes to be atomically applied to the specified row if "predicate_filter"
 	// yields at least one cell when applied to "row_key". Entries are applied in
 	// order, meaning that earlier mutations can be masked by later ones.
 	// Must contain at least one entry if "false_mutations" is empty, and at most
 	// 100000.
-	TrueMutations []*google_bigtable_v11.Mutation `protobuf:"bytes,4,rep,name=true_mutations" json:"true_mutations,omitempty"`
+	TrueMutations []*google_bigtable_v11.Mutation `protobuf:"bytes,4,rep,name=true_mutations,json=trueMutations" json:"true_mutations,omitempty"`
 	// Changes to be atomically applied to the specified row if "predicate_filter"
 	// does not yield any cells when applied to "row_key". Entries are applied in
 	// order, meaning that earlier mutations can be masked by later ones.
 	// Must contain at least one entry if "true_mutations" is empty, and at most
 	// 100000.
-	FalseMutations []*google_bigtable_v11.Mutation `protobuf:"bytes,5,rep,name=false_mutations" json:"false_mutations,omitempty"`
+	FalseMutations []*google_bigtable_v11.Mutation `protobuf:"bytes,5,rep,name=false_mutations,json=falseMutations" json:"false_mutations,omitempty"`
 }
 
 func (m *CheckAndMutateRowRequest) Reset()                    { *m = CheckAndMutateRowRequest{} }
 func (m *CheckAndMutateRowRequest) String() string            { return proto.CompactTextString(m) }
 func (*CheckAndMutateRowRequest) ProtoMessage()               {}
-func (*CheckAndMutateRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+func (*CheckAndMutateRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
 
 func (m *CheckAndMutateRowRequest) GetPredicateFilter() *google_bigtable_v11.RowFilter {
 	if m != nil {
@@ -461,21 +567,21 @@
 type CheckAndMutateRowResponse struct {
 	// Whether or not the request's "predicate_filter" yielded any results for
 	// the specified row.
-	PredicateMatched bool `protobuf:"varint,1,opt,name=predicate_matched" json:"predicate_matched,omitempty"`
+	PredicateMatched bool `protobuf:"varint,1,opt,name=predicate_matched,json=predicateMatched" json:"predicate_matched,omitempty"`
 }
 
 func (m *CheckAndMutateRowResponse) Reset()                    { *m = CheckAndMutateRowResponse{} }
 func (m *CheckAndMutateRowResponse) String() string            { return proto.CompactTextString(m) }
 func (*CheckAndMutateRowResponse) ProtoMessage()               {}
-func (*CheckAndMutateRowResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+func (*CheckAndMutateRowResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
 
 // Request message for BigtableService.ReadModifyWriteRowRequest.
 type ReadModifyWriteRowRequest struct {
 	// The unique name of the table to which the read/modify/write rules should be
 	// applied.
-	TableName string `protobuf:"bytes,1,opt,name=table_name" json:"table_name,omitempty"`
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
 	// The key of the row to which the read/modify/write rules should be applied.
-	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,proto3" json:"row_key,omitempty"`
+	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
 	// Rules specifying how the specified row's contents are to be transformed
 	// into writes. Entries are applied in order, meaning that earlier rules will
 	// affect the results of later ones.
@@ -485,7 +591,7 @@
 func (m *ReadModifyWriteRowRequest) Reset()                    { *m = ReadModifyWriteRowRequest{} }
 func (m *ReadModifyWriteRowRequest) String() string            { return proto.CompactTextString(m) }
 func (*ReadModifyWriteRowRequest) ProtoMessage()               {}
-func (*ReadModifyWriteRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
+func (*ReadModifyWriteRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
 
 func (m *ReadModifyWriteRowRequest) GetRules() []*google_bigtable_v11.ReadModifyWriteRule {
 	if m != nil {
@@ -501,47 +607,64 @@
 	proto.RegisterType((*SampleRowKeysRequest)(nil), "google.bigtable.v1.SampleRowKeysRequest")
 	proto.RegisterType((*SampleRowKeysResponse)(nil), "google.bigtable.v1.SampleRowKeysResponse")
 	proto.RegisterType((*MutateRowRequest)(nil), "google.bigtable.v1.MutateRowRequest")
+	proto.RegisterType((*MutateRowsRequest)(nil), "google.bigtable.v1.MutateRowsRequest")
+	proto.RegisterType((*MutateRowsRequest_Entry)(nil), "google.bigtable.v1.MutateRowsRequest.Entry")
+	proto.RegisterType((*MutateRowsResponse)(nil), "google.bigtable.v1.MutateRowsResponse")
 	proto.RegisterType((*CheckAndMutateRowRequest)(nil), "google.bigtable.v1.CheckAndMutateRowRequest")
 	proto.RegisterType((*CheckAndMutateRowResponse)(nil), "google.bigtable.v1.CheckAndMutateRowResponse")
 	proto.RegisterType((*ReadModifyWriteRowRequest)(nil), "google.bigtable.v1.ReadModifyWriteRowRequest")
 }
 
 var fileDescriptor0 = []byte{
-	// 574 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x9c, 0x54, 0xd1, 0x6e, 0x12, 0x41,
-	0x14, 0x75, 0x5d, 0xd9, 0xd2, 0x5b, 0x52, 0xda, 0xb1, 0x36, 0x5b, 0x52, 0x8d, 0xd9, 0x17, 0x4d,
-	0x13, 0x97, 0x14, 0xb5, 0x1a, 0x1f, 0x4c, 0xa4, 0x49, 0xd3, 0xc4, 0x90, 0x34, 0xf4, 0xa1, 0x8f,
-	0x64, 0xd8, 0xbd, 0x2c, 0x13, 0x66, 0x77, 0x70, 0x67, 0x96, 0xca, 0x5f, 0xfa, 0x01, 0xbe, 0xfa,
-	0x1f, 0xce, 0x0c, 0x4b, 0x69, 0x10, 0x2a, 0xfa, 0x40, 0x02, 0x67, 0xce, 0xbd, 0xf7, 0x9c, 0x3b,
-	0x87, 0x81, 0x9b, 0x44, 0x88, 0x84, 0x63, 0x98, 0x08, 0x4e, 0xb3, 0x24, 0x14, 0x79, 0xd2, 0x8c,
-	0xb8, 0x28, 0xe2, 0x66, 0x9f, 0x25, 0x8a, 0xf6, 0x39, 0x36, 0x59, 0xa6, 0x30, 0xcf, 0x28, 0x6f,
-	0x4a, 0xcc, 0x27, 0x2c, 0xc2, 0xde, 0x38, 0x17, 0x4a, 0xdc, 0x9d, 0xf7, 0xe6, 0x70, 0x8a, 0x52,
-	0xd2, 0x04, 0x65, 0x68, 0xcf, 0x09, 0x29, 0x1b, 0xcf, 0x79, 0xe1, 0xe4, 0xb4, 0x71, 0xb9, 0xf9,
-	0xb0, 0x98, 0x2a, 0xba, 0x3c, 0xc9, 0x60, 0xb3, 0xee, 0xc1, 0x4f, 0x07, 0xea, 0x5d, 0xa4, 0x71,
-	0x57, 0xdc, 0xca, 0x2e, 0x7e, 0x2b, 0x50, 0x2a, 0x42, 0x00, 0x66, 0xbc, 0x8c, 0xa6, 0xe8, 0x3b,
-	0x2f, 0x9d, 0xd7, 0xdb, 0x64, 0x1f, 0xb6, 0x72, 0x71, 0xdb, 0x1b, 0xe1, 0xd4, 0x7f, 0xac, 0x81,
-	0xda, 0xe5, 0x23, 0x72, 0x0a, 0xdb, 0x06, 0xca, 0xb5, 0x02, 0xf4, 0x5d, 0x0d, 0xee, 0xb4, 0x8e,
-	0xc3, 0x3f, 0xc5, 0x86, 0xba, 0x75, 0xd7, 0x70, 0x74, 0xc9, 0x1b, 0xf0, 0x06, 0x8c, 0x6b, 0x65,
-	0x7e, 0xc5, 0xf2, 0x9f, 0xaf, 0xe1, 0x5f, 0x58, 0x12, 0x79, 0x01, 0x87, 0x94, 0x73, 0x33, 0x43,
-	0x7f, 0xac, 0x23, 0x8e, 0x74, 0xc2, 0xb2, 0xc4, 0xf7, 0x74, 0x79, 0x95, 0x1c, 0xc2, 0x6e, 0x56,
-	0xa4, 0xe6, 0x54, 0xf6, 0x38, 0x4b, 0x99, 0xf2, 0xb7, 0x34, 0xee, 0xb6, 0xab, 0xe0, 0x29, 0x9a,
-	0x27, 0xa8, 0x82, 0x1f, 0x0e, 0xec, 0x2d, 0xec, 0xc9, 0xb1, 0xc8, 0x24, 0x92, 0xfa, 0xc2, 0x8b,
-	0x31, 0x57, 0x23, 0x9f, 0xc0, 0x8b, 0x86, 0x45, 0x36, 0x92, 0xda, 0x9b, 0xab, 0x65, 0x9d, 0xac,
-	0x94, 0xb5, 0xd4, 0x26, 0x3c, 0x37, 0x25, 0x0d, 0x01, 0x15, 0xfb, 0x85, 0xb4, 0xa0, 0x66, 0xba,
-	0x46, 0x42, 0xeb, 0xcc, 0x94, 0xb4, 0xad, 0x77, 0x5a, 0x8d, 0x55, 0xad, 0x2e, 0x68, 0xca, 0xf8,
-	0x54, 0xef, 0xe3, 0xa9, 0x5e, 0x21, 0x4a, 0x54, 0xc6, 0x82, 0xdd, 0x6b, 0x55, 0x83, 0x07, 0x00,
-	0x91, 0x48, 0xb5, 0x1b, 0x8b, 0xba, 0x33, 0xb4, 0xbd, 0x05, 0x15, 0xab, 0x31, 0x38, 0x81, 0x83,
-	0x6b, 0x9a, 0x8e, 0x39, 0x6a, 0x31, 0x5f, 0x71, 0xfa, 0xd0, 0xad, 0x05, 0x9f, 0xe1, 0xd9, 0x12,
-	0x77, 0xdd, 0x0a, 0x0e, 0xa0, 0x26, 0x06, 0x03, 0x23, 0xa5, 0x3f, 0x55, 0x28, 0xad, 0x18, 0x37,
-	0x18, 0xc2, 0x5e, 0xa7, 0x50, 0x54, 0x99, 0xfa, 0x87, 0xd2, 0x51, 0x5f, 0x4a, 0x07, 0x69, 0xc2,
-	0x76, 0x6a, 0x0a, 0x99, 0x9e, 0xa6, 0x2d, 0xb8, 0xeb, 0xb2, 0xd1, 0x29, 0x49, 0xc1, 0x2f, 0x07,
-	0xfc, 0xf3, 0x21, 0x46, 0xa3, 0x2f, 0x59, 0xfc, 0x7f, 0x23, 0x3f, 0xc0, 0xde, 0x38, 0xc7, 0x98,
-	0x45, 0xba, 0xb6, 0x57, 0xa6, 0xcc, 0xdb, 0x24, 0x65, 0xef, 0x60, 0x57, 0xe5, 0x85, 0xfe, 0xdf,
-	0xdd, 0x09, 0x7e, 0xf2, 0x77, 0xc1, 0xe4, 0x3d, 0xd4, 0x07, 0x94, 0xcb, 0xfb, 0x65, 0x95, 0x0d,
-	0x7c, 0x9e, 0xc1, 0xd1, 0x0a, 0x9b, 0xe5, 0xad, 0x1c, 0xc1, 0xfe, 0xc2, 0x42, 0x4a, 0x55, 0x34,
-	0xc4, 0xd8, 0xda, 0xad, 0x06, 0xdf, 0xe1, 0xc8, 0x04, 0xb0, 0x23, 0x62, 0x36, 0x98, 0xde, 0xe4,
-	0xec, 0xdf, 0xf7, 0x73, 0x06, 0x95, 0xbc, 0xe0, 0x38, 0xbf, 0x8e, 0x57, 0xeb, 0x32, 0x7e, 0x7f,
-	0x84, 0xe6, 0xb7, 0x3f, 0xc2, 0xa1, 0x8e, 0xe3, 0x0a, 0x76, 0xfb, 0xb8, 0x5d, 0xfe, 0xb8, 0x9e,
-	0xbd, 0x5c, 0x9d, 0xf2, 0xe1, 0xba, 0x32, 0x2f, 0xcb, 0x95, 0xd3, 0xf7, 0xec, 0x13, 0xf3, 0xf6,
-	0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0xef, 0x18, 0xfb, 0x81, 0x1b, 0x05, 0x00, 0x00,
+	// 789 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x55, 0xdf, 0x8b, 0x23, 0x45,
+	0x10, 0xde, 0x49, 0x4c, 0x36, 0xa9, 0x8d, 0xbb, 0x7b, 0xcd, 0x79, 0x37, 0x1b, 0x6e, 0x31, 0x0e,
+	0x82, 0xc1, 0x83, 0x09, 0x9e, 0x2e, 0x88, 0x22, 0x62, 0xe2, 0x9e, 0x11, 0x8d, 0x1c, 0x9d, 0x87,
+	0x7b, 0x11, 0x86, 0xce, 0xa4, 0x32, 0x3b, 0xec, 0x4c, 0x77, 0xec, 0xee, 0xd9, 0x21, 0xcf, 0x82,
+	0xef, 0xfa, 0x57, 0xf8, 0x1f, 0xf9, 0xe2, 0x1f, 0x23, 0xdd, 0x33, 0xf9, 0x61, 0x4c, 0x70, 0x0e,
+	0xee, 0x2d, 0xf3, 0x55, 0x7d, 0x5f, 0x55, 0x7d, 0x5d, 0xe9, 0x86, 0xd7, 0x91, 0x10, 0x51, 0x82,
+	0x7e, 0x24, 0x12, 0xc6, 0x23, 0x5f, 0xc8, 0x68, 0x10, 0x26, 0x22, 0x9b, 0x0f, 0x66, 0x71, 0xa4,
+	0xd9, 0x2c, 0xc1, 0x41, 0xcc, 0x35, 0x4a, 0xce, 0x92, 0x81, 0x42, 0xf9, 0x10, 0x87, 0x18, 0x2c,
+	0xa5, 0xd0, 0x62, 0x13, 0x0f, 0xd6, 0x70, 0x8a, 0x4a, 0xb1, 0x08, 0x95, 0x6f, 0xe3, 0x84, 0x94,
+	0xc2, 0xeb, 0x3c, 0xff, 0xe1, 0x93, 0xee, 0xb8, 0x7a, 0xb1, 0x39, 0xd3, 0x6c, 0xbf, 0x92, 0xc1,
+	0x0a, 0xf5, 0xee, 0x77, 0xd5, 0x95, 0xe4, 0x32, 0x0c, 0x94, 0x66, 0x3a, 0x53, 0xa5, 0x5e, 0xf1,
+	0x51, 0x08, 0x79, 0x7f, 0xd7, 0xe0, 0x82, 0x22, 0x9b, 0x53, 0x91, 0x2b, 0x8a, 0xbf, 0x64, 0xa8,
+	0x34, 0xb9, 0x06, 0x28, 0x0a, 0x72, 0x96, 0xa2, 0xeb, 0xf4, 0x9c, 0x7e, 0x9b, 0xb6, 0x2d, 0xf2,
+	0x13, 0x4b, 0x91, 0x5c, 0xc1, 0xa9, 0x14, 0x79, 0x70, 0x8f, 0x2b, 0xb7, 0xd6, 0x73, 0xfa, 0x9d,
+	0xf1, 0x09, 0x6d, 0x4a, 0x91, 0xff, 0x80, 0x2b, 0xf2, 0x25, 0xb4, 0x4d, 0x48, 0x32, 0x1e, 0xa1,
+	0x5b, 0xef, 0x39, 0xfd, 0xb3, 0x17, 0xcf, 0xfc, 0xff, 0x1a, 0xe1, 0x53, 0x91, 0x53, 0x93, 0x33,
+	0x3e, 0xa1, 0x2d, 0x59, 0xfe, 0x26, 0x37, 0x85, 0xae, 0x42, 0xed, 0xb6, 0x2c, 0xb5, 0x7b, 0x84,
+	0x3a, 0x45, 0x5d, 0xd6, 0x9c, 0xa2, 0x26, 0x37, 0xd0, 0x5c, 0xc4, 0x89, 0x46, 0xe9, 0x36, 0x2c,
+	0xeb, 0xfa, 0x08, 0xeb, 0xa5, 0x4d, 0xa2, 0x65, 0x32, 0xf9, 0x0c, 0x9e, 0xb0, 0x24, 0x31, 0xcd,
+	0x8a, 0x3c, 0xb0, 0x66, 0x25, 0xc8, 0x1e, 0x62, 0x1e, 0xb9, 0xcd, 0x9e, 0xd3, 0x6f, 0xd1, 0xc7,
+	0x36, 0x4a, 0x45, 0xfe, 0xfd, 0x4e, 0x8c, 0x7c, 0x08, 0xe7, 0x3c, 0x4b, 0x0d, 0x47, 0x05, 0x49,
+	0x9c, 0xc6, 0xda, 0x3d, 0xed, 0x39, 0xfd, 0x3a, 0xed, 0xf0, 0x2c, 0x35, 0x16, 0xfe, 0x68, 0xb0,
+	0x61, 0x0b, 0x9a, 0x9a, 0xc9, 0x08, 0xb5, 0xf7, 0x6b, 0x0d, 0x2e, 0xb7, 0xf6, 0xaa, 0xa5, 0xe0,
+	0x0a, 0xc9, 0xd3, 0xad, 0x81, 0xc6, 0xdc, 0xce, 0xc6, 0xbe, 0x21, 0x34, 0xc3, 0xbb, 0x8c, 0xdf,
+	0x2b, 0xb7, 0xd6, 0xab, 0xf7, 0xcf, 0x5e, 0x7c, 0x7c, 0x70, 0x94, 0x3d, 0x39, 0x7f, 0x64, 0x28,
+	0xb4, 0x64, 0x76, 0x7f, 0x77, 0xa0, 0x61, 0x11, 0xf2, 0x35, 0x74, 0x4c, 0x99, 0x50, 0x70, 0x8d,
+	0x5c, 0x2b, 0x5b, 0xeb, 0x88, 0xa9, 0x2f, 0x59, 0x1a, 0x27, 0xab, 0xf1, 0x09, 0x3d, 0x93, 0x22,
+	0x1f, 0x95, 0x04, 0x72, 0x0d, 0x6d, 0x89, 0x0a, 0xb5, 0x19, 0xd7, 0x1e, 0x75, 0xcb, 0x9e, 0x97,
+	0x81, 0xa8, 0xc8, 0xc9, 0xfb, 0x00, 0xa1, 0x48, 0xd3, 0xb8, 0x88, 0xd7, 0xcb, 0x78, 0xbb, 0xc0,
+	0xa8, 0xc8, 0x87, 0xa7, 0xd0, 0xb0, 0x4d, 0x79, 0x37, 0xf0, 0x78, 0xca, 0xd2, 0x65, 0x82, 0xd4,
+	0xce, 0x59, 0x71, 0xd1, 0xbc, 0x29, 0xbc, 0xb7, 0x47, 0xfb, 0x3f, 0x03, 0x3f, 0x80, 0x8e, 0x58,
+	0x2c, 0x4c, 0xcb, 0xb3, 0x95, 0x46, 0x65, 0x9b, 0xae, 0xd3, 0xb3, 0x02, 0x1b, 0x1a, 0xc8, 0xfb,
+	0xcd, 0x81, 0xcb, 0x49, 0xa6, 0x99, 0x36, 0xaa, 0x15, 0x37, 0xfe, 0xe9, 0xde, 0xc6, 0x6f, 0xea,
+	0x7d, 0x01, 0xed, 0xd4, 0x68, 0xc5, 0x82, 0x2b, 0xb7, 0x6e, 0xcf, 0xec, 0xe0, 0xbe, 0x4f, 0xca,
+	0x24, 0xba, 0x4d, 0xf7, 0xfe, 0x72, 0xe0, 0xd1, 0xa6, 0x91, 0xaa, 0xff, 0xbd, 0x5b, 0x38, 0x45,
+	0xae, 0x65, 0x8c, 0xeb, 0x15, 0x79, 0x7e, 0xb4, 0xdc, 0xae, 0xac, 0x7f, 0xcb, 0xb5, 0x5c, 0xd1,
+	0x35, 0xb7, 0xfb, 0x33, 0x34, 0x2c, 0x72, 0xdc, 0xc9, 0x7f, 0x4d, 0x56, 0x7b, 0xb3, 0xc9, 0xbe,
+	0x05, 0xb2, 0xdb, 0x41, 0x79, 0x68, 0x3e, 0xb4, 0x8a, 0x9b, 0x07, 0xcd, 0x2a, 0x1a, 0x41, 0xb2,
+	0x16, 0x94, 0xcb, 0xd0, 0x9f, 0xda, 0x18, 0xdd, 0xe4, 0x78, 0x7f, 0xd6, 0xc0, 0x1d, 0xdd, 0x61,
+	0x78, 0xff, 0x0d, 0x9f, 0xbf, 0xb5, 0x03, 0x1b, 0xc3, 0xe5, 0x52, 0xe2, 0x3c, 0x0e, 0x99, 0xc6,
+	0xa0, 0xbc, 0x36, 0x9a, 0x55, 0xae, 0x8d, 0x8b, 0x0d, 0xad, 0x00, 0xc8, 0x08, 0xce, 0xb5, 0xcc,
+	0x30, 0xd8, 0xba, 0xf4, 0x4e, 0x05, 0x97, 0xde, 0x35, 0x9c, 0xf5, 0x97, 0x22, 0xb7, 0x70, 0xb1,
+	0x60, 0x89, 0xda, 0x55, 0x69, 0x54, 0x50, 0x39, 0xb7, 0xa4, 0x8d, 0x8c, 0x37, 0x86, 0xab, 0x03,
+	0x4e, 0x95, 0xbe, 0x3f, 0x87, 0x47, 0xdb, 0x91, 0x53, 0xa6, 0xc3, 0x3b, 0x9c, 0x5b, 0xc7, 0x5a,
+	0x74, 0xeb, 0xc5, 0xa4, 0xc0, 0xbd, 0x3f, 0x1c, 0xb8, 0x32, 0x17, 0xcc, 0x44, 0xcc, 0xe3, 0xc5,
+	0xea, 0xb5, 0x8c, 0xdf, 0x8a, 0xeb, 0x5f, 0x41, 0x43, 0x66, 0x09, 0xae, 0xff, 0x22, 0x1f, 0x1d,
+	0xbb, 0xd6, 0x76, 0xab, 0x66, 0x09, 0xd2, 0x82, 0x35, 0xfc, 0x1c, 0x9e, 0x84, 0x22, 0x3d, 0x40,
+	0x1a, 0x3e, 0x1b, 0x96, 0x1f, 0xd3, 0xe2, 0x11, 0x9e, 0x94, 0x6f, 0xf0, 0x2b, 0xf3, 0xb6, 0xbd,
+	0x72, 0x66, 0x4d, 0xfb, 0xc8, 0x7d, 0xfa, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x9a, 0xba, 0xcf,
+	0xcf, 0xe6, 0x07, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.proto b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.proto
index 661310a..96046c4 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/service_proto/bigtable_service_messages.proto
@@ -17,6 +17,7 @@
 package google.bigtable.v1;
 
 import "google.golang.org/cloud/bigtable/internal/data_proto/bigtable_data.proto";
+import "google.golang.org/cloud/bigtable/internal/rpc_status_proto/status.proto";
 
 option java_multiple_files = true;
 option java_outer_classname = "BigtableServiceMessagesProto";
@@ -35,6 +36,11 @@
 
     // A range of rows from which to read.
     RowRange row_range = 3;
+
+    // A set of rows from which to read. Entries need not be in order, and will
+    // be deduplicated before reading.
+    // The total serialized size of the set must not exceed 1MB.
+    RowSet row_set = 8;
   }
 
   // The filter to apply to the contents of the specified row(s). If unset,
@@ -124,6 +130,40 @@
   repeated Mutation mutations = 3;
 }
 
+// Request message for BigtableService.MutateRows.
+message MutateRowsRequest {
+  message Entry {
+    // The key of the row to which the `mutations` should be applied.
+    bytes row_key = 1;
+
+    // Changes to be atomically applied to the specified row. Mutations are
+    // applied in order, meaning that earlier mutations can be masked by
+    // later ones.
+    // At least one mutation must be specified.
+    repeated Mutation mutations = 2;
+  }
+
+  // The unique name of the table to which the mutations should be applied.
+  string table_name = 1;
+
+  // The row keys/mutations to be applied in bulk.
+  // Each entry is applied as an atomic mutation, but the entries may be
+  // applied in arbitrary order (even between entries for the same row).
+  // At least one entry must be specified, and in total the entries may
+  // contain at most 100000 mutations.
+  repeated Entry entries = 2;
+}
+
+// Response message for BigtableService.MutateRows.
+message MutateRowsResponse {
+  // The results for each Entry from the request, presented in the order
+  // in which the entries were originally given.
+  // Depending on how requests are batched during execution, it is possible
+  // for one Entry to fail due to an error with another Entry. In the event
+  // that this occurs, the same error will be reported for both entries.
+  repeated google.rpc.Status statuses = 1;
+}
+
 // Request message for BigtableService.CheckAndMutateRowRequest
 message CheckAndMutateRowRequest {
   // The unique name of the table to which the conditional mutation should be
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.pb.go
index 48f130e..d41a56f 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.pb.go
@@ -18,13 +18,17 @@
 import proto "github.com/golang/protobuf/proto"
 import fmt "fmt"
 import math "math"
-import google_protobuf "google.golang.org/cloud/bigtable/internal/duration_proto"
+import google_protobuf "github.com/golang/protobuf/ptypes/duration"
 
 // Reference imports to suppress errors if they are not otherwise used.
 var _ = proto.Marshal
 var _ = fmt.Errorf
 var _ = math.Inf
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
 type Table_TimestampGranularity int32
 
 const (
@@ -52,7 +56,7 @@
 	// <cluster_name>/tables/[_a-zA-Z0-9][-_.a-zA-Z0-9]*
 	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
 	// The column families configured for this table, mapped by column family id.
-	ColumnFamilies map[string]*ColumnFamily `protobuf:"bytes,3,rep,name=column_families" json:"column_families,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
+	ColumnFamilies map[string]*ColumnFamily `protobuf:"bytes,3,rep,name=column_families,json=columnFamilies" json:"column_families,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
 	// The granularity (e.g. MILLIS, MICROS) at which timestamps are stored in
 	// this table. Timestamps not matching the granularity will be rejected.
 	// Cannot be changed once the table is created.
@@ -102,7 +106,7 @@
 	// Garbage collection executes opportunistically in the background, and so
 	// it's possible for reads to return a cell even if it matches the active GC
 	// expression for its family.
-	GcExpression string `protobuf:"bytes,2,opt,name=gc_expression" json:"gc_expression,omitempty"`
+	GcExpression string `protobuf:"bytes,2,opt,name=gc_expression,json=gcExpression" json:"gc_expression,omitempty"`
 	// Garbage collection rule specified as a protobuf.
 	// Supersedes `gc_expression`.
 	// Must serialize to at most 500 bytes.
@@ -110,7 +114,7 @@
 	// NOTE: Garbage collection executes opportunistically in the background, and
 	// so it's possible for reads to return a cell even if it matches the active
 	// GC expression for its family.
-	GcRule *GcRule `protobuf:"bytes,3,opt,name=gc_rule" json:"gc_rule,omitempty"`
+	GcRule *GcRule `protobuf:"bytes,3,opt,name=gc_rule,json=gcRule" json:"gc_rule,omitempty"`
 }
 
 func (m *ColumnFamily) Reset()                    { *m = ColumnFamily{} }
@@ -145,10 +149,10 @@
 }
 
 type GcRule_MaxNumVersions struct {
-	MaxNumVersions int32 `protobuf:"varint,1,opt,name=max_num_versions,oneof"`
+	MaxNumVersions int32 `protobuf:"varint,1,opt,name=max_num_versions,json=maxNumVersions,oneof"`
 }
 type GcRule_MaxAge struct {
-	MaxAge *google_protobuf.Duration `protobuf:"bytes,2,opt,name=max_age,oneof"`
+	MaxAge *google_protobuf.Duration `protobuf:"bytes,2,opt,name=max_age,json=maxAge,oneof"`
 }
 type GcRule_Intersection_ struct {
 	Intersection *GcRule_Intersection `protobuf:"bytes,3,opt,name=intersection,oneof"`
@@ -350,35 +354,38 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 467 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x9c, 0x93, 0x5d, 0x6f, 0xd3, 0x30,
-	0x14, 0x86, 0xd7, 0x8f, 0x74, 0xda, 0x69, 0x81, 0xc9, 0x7c, 0xa8, 0xe4, 0x02, 0x4d, 0xb9, 0x40,
-	0xbb, 0x98, 0x5c, 0xd1, 0x09, 0x01, 0x43, 0xec, 0xa2, 0x6c, 0x94, 0x8a, 0x21, 0xa6, 0x51, 0xae,
-	0x23, 0x37, 0xf5, 0xac, 0x08, 0x7f, 0x54, 0x4e, 0x5c, 0xad, 0xbf, 0x8f, 0xdf, 0xc3, 0x7f, 0xc0,
-	0x76, 0x5d, 0x16, 0xa1, 0x89, 0x46, 0xbb, 0xb3, 0x7d, 0xce, 0xfb, 0xbc, 0xc7, 0x6f, 0x1c, 0x98,
-	0x32, 0xa5, 0x18, 0xa7, 0x98, 0x29, 0x4e, 0x24, 0xc3, 0x4a, 0xb3, 0x41, 0xc6, 0x95, 0x99, 0x0f,
-	0x66, 0x39, 0x2b, 0xc9, 0x8c, 0xd3, 0x41, 0x2e, 0x4b, 0xaa, 0x25, 0xe1, 0x03, 0xbf, 0x4d, 0xe7,
-	0xa4, 0x24, 0xe9, 0x42, 0xab, 0x52, 0xfd, 0x6d, 0x49, 0x6f, 0x2b, 0xd8, 0x57, 0xd0, 0x8b, 0x40,
-	0xdd, 0x74, 0x60, 0x32, 0x17, 0xb9, 0xc4, 0xeb, 0xf5, 0xf2, 0x55, 0x3c, 0xae, 0xef, 0x3a, 0x37,
-	0x9a, 0x94, 0xb9, 0x92, 0xc1, 0x73, 0xb3, 0x5d, 0x1b, 0x25, 0xbf, 0x9a, 0x10, 0x4d, 0x9d, 0x00,
-	0xf5, 0xa0, 0x2d, 0x89, 0xa0, 0xfd, 0xc6, 0x41, 0xe3, 0x70, 0x0f, 0x5d, 0xc1, 0xa3, 0x4c, 0x71,
-	0x23, 0x64, 0x7a, 0x4d, 0x44, 0xce, 0x73, 0x5a, 0xf4, 0x5b, 0x07, 0xad, 0xc3, 0xee, 0xf0, 0x1d,
-	0xfe, 0xff, 0x68, 0xd8, 0xd3, 0xf0, 0x47, 0x2f, 0xfe, 0x14, 0xb4, 0xe7, 0xb2, 0xd4, 0x2b, 0xf4,
-	0x0d, 0xba, 0x4c, 0x13, 0x69, 0x38, 0xd1, 0x79, 0xb9, 0xea, 0xb7, 0xad, 0xd1, 0xc3, 0xe1, 0x49,
-	0x3d, 0xde, 0x34, 0x17, 0xb4, 0x28, 0x89, 0x58, 0x8c, 0x6f, 0x09, 0x71, 0x06, 0x8f, 0xef, 0xf2,
-	0xe9, 0x42, 0xeb, 0x27, 0x5d, 0x85, 0x8b, 0xbc, 0x87, 0x68, 0x49, 0xb8, 0xa1, 0xfd, 0xa6, 0xdd,
-	0x76, 0x87, 0x47, 0xdb, 0xec, 0x2a, 0xc0, 0xd5, 0x49, 0xf3, 0x6d, 0x23, 0x49, 0xe0, 0xc9, 0x5d,
-	0xe6, 0x08, 0xa0, 0xf3, 0x75, 0x72, 0x71, 0x31, 0xf9, 0xbe, 0xbf, 0x93, 0x70, 0xe8, 0x55, 0x75,
-	0xff, 0x64, 0xf9, 0x14, 0x1e, 0xb0, 0x2c, 0xa5, 0x37, 0x0b, 0x4d, 0x8b, 0xc2, 0x46, 0xef, 0x47,
-	0xd9, 0x43, 0x6f, 0x60, 0xd7, 0x1e, 0x6b, 0xc3, 0xa9, 0x8d, 0xd6, 0xcd, 0xf6, 0x72, 0xdb, 0x6c,
-	0xe3, 0xec, 0xca, 0x76, 0x27, 0xbf, 0x9b, 0xd0, 0x59, 0x2f, 0x51, 0x0c, 0xfb, 0x82, 0xdc, 0xa4,
-	0xd2, 0x88, 0x74, 0x49, 0xb5, 0x83, 0x17, 0xde, 0x34, 0xfa, 0xbc, 0x83, 0x8e, 0x60, 0xd7, 0xd5,
-	0x08, 0xdb, 0xdc, 0xfd, 0xf9, 0x86, 0xef, 0x3f, 0xfd, 0xcc, 0x5c, 0xe3, 0xb3, 0xf0, 0x18, 0x6c,
-	0xf7, 0x17, 0xe8, 0xf9, 0x27, 0x53, 0xd0, 0xcc, 0x9d, 0x84, 0x91, 0x8e, 0xeb, 0x8d, 0x84, 0x27,
-	0x15, 0xa9, 0x85, 0x7d, 0x80, 0xc8, 0x48, 0x47, 0x69, 0xd7, 0x0b, 0x3d, 0x50, 0x7e, 0x48, 0x2f,
-	0x8f, 0xcf, 0xa1, 0x57, 0x05, 0xa2, 0xd7, 0x10, 0xb9, 0x98, 0xdc, 0xd5, 0x5a, 0xf5, 0x73, 0x8a,
-	0x4f, 0x21, 0xf2, 0xc4, 0x7b, 0xea, 0x47, 0x1d, 0x68, 0x3b, 0xd9, 0xe8, 0x14, 0x92, 0x4c, 0x89,
-	0x2d, 0xa2, 0xd1, 0xb3, 0x51, 0x28, 0xf8, 0x07, 0x7b, 0x66, 0xff, 0xe5, 0x4b, 0x17, 0xf3, 0x65,
-	0x63, 0xd6, 0xf1, 0x79, 0x1f, 0xff, 0x09, 0x00, 0x00, 0xff, 0xff, 0x76, 0x63, 0xd0, 0xf1, 0x2b,
-	0x04, 0x00, 0x00,
+	// 519 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xa4, 0x93, 0x6f, 0x8b, 0xd3, 0x40,
+	0x10, 0xc6, 0x9b, 0xa6, 0xc9, 0x71, 0xd3, 0x5a, 0xcb, 0x2a, 0x52, 0xfb, 0x42, 0x4a, 0x04, 0x29,
+	0x22, 0x09, 0xf6, 0x7c, 0xa1, 0x87, 0x28, 0xd6, 0xd6, 0x6b, 0xa1, 0xca, 0x11, 0xab, 0x20, 0x08,
+	0x61, 0x9b, 0xee, 0x2d, 0xc1, 0xfd, 0x53, 0x92, 0x6c, 0x69, 0xbf, 0x81, 0x1f, 0xc5, 0xcf, 0xe7,
+	0x27, 0x90, 0xdd, 0xa4, 0x77, 0x3d, 0x28, 0xb6, 0xe2, 0xab, 0x4c, 0x66, 0xe6, 0xf9, 0xed, 0x93,
+	0xd9, 0x09, 0xcc, 0xa8, 0x94, 0x94, 0x11, 0x9f, 0x4a, 0x86, 0x05, 0xf5, 0x65, 0x4a, 0x83, 0x98,
+	0x49, 0xb5, 0x08, 0xe6, 0x09, 0xcd, 0xf1, 0x9c, 0x91, 0x20, 0x11, 0x39, 0x49, 0x05, 0x66, 0x81,
+	0x79, 0x8d, 0x16, 0x38, 0xc7, 0xd1, 0x32, 0x95, 0xb9, 0xbc, 0x6e, 0x89, 0x6e, 0x2a, 0xbe, 0xa9,
+	0xa0, 0x47, 0x25, 0x75, 0xdb, 0xe1, 0xe3, 0x05, 0x4f, 0x84, 0x5f, 0xc4, 0xab, 0xe7, 0x9d, 0xb2,
+	0x1e, 0x98, 0xee, 0xb9, 0xba, 0x0a, 0x16, 0x2a, 0xc5, 0x79, 0x22, 0x45, 0xa1, 0xf7, 0x7e, 0x57,
+	0xc1, 0x99, 0xe9, 0x66, 0x84, 0xa0, 0x26, 0x30, 0x27, 0x6d, 0xab, 0x6b, 0xf5, 0x4e, 0x43, 0x13,
+	0xa3, 0x39, 0xdc, 0x8d, 0x25, 0x53, 0x5c, 0x44, 0x57, 0x98, 0x27, 0x2c, 0x21, 0x59, 0xdb, 0xee,
+	0xda, 0xbd, 0x7a, 0xff, 0x95, 0xff, 0xf7, 0x73, 0x7d, 0xc3, 0xf4, 0xdf, 0x1b, 0xf1, 0x87, 0x52,
+	0x3b, 0x12, 0x79, 0xba, 0x09, 0x9b, 0xf1, 0xad, 0x24, 0xfa, 0x0e, 0x75, 0x9a, 0x62, 0xa1, 0x18,
+	0x4e, 0x93, 0x7c, 0xd3, 0xae, 0x75, 0xad, 0x5e, 0xb3, 0x7f, 0x7e, 0x1c, 0x7f, 0x96, 0x70, 0x92,
+	0xe5, 0x98, 0x2f, 0x2f, 0x6e, 0x08, 0xe1, 0x2e, 0xae, 0x23, 0xe1, 0xde, 0x1e, 0x13, 0xa8, 0x05,
+	0xf6, 0x0f, 0xb2, 0x29, 0xbf, 0x55, 0x87, 0x68, 0x00, 0xce, 0x0a, 0x33, 0x45, 0xda, 0xd5, 0xae,
+	0xd5, 0xab, 0xf7, 0x9f, 0x1d, 0x32, 0xb0, 0x43, 0xdd, 0x84, 0x85, 0xf4, 0xbc, 0xfa, 0xd2, 0xf2,
+	0x3c, 0xb8, 0xbf, 0xcf, 0x15, 0x02, 0x70, 0x3f, 0x4e, 0xa6, 0xd3, 0xc9, 0xe7, 0x56, 0xc5, 0xfb,
+	0x69, 0x41, 0x63, 0x57, 0xbf, 0x77, 0xf6, 0x8f, 0xe1, 0x0e, 0x8d, 0x23, 0xb2, 0x5e, 0xa6, 0x24,
+	0xcb, 0x12, 0x29, 0x8c, 0xb1, 0xd3, 0xb0, 0x41, 0xe3, 0xd1, 0x75, 0x0e, 0xbd, 0x85, 0x13, 0x1a,
+	0x47, 0xa9, 0x62, 0xa4, 0x6d, 0x1b, 0xdf, 0x4f, 0x0e, 0xf9, 0xbe, 0x88, 0x43, 0xc5, 0x48, 0xe8,
+	0x52, 0xf3, 0xf4, 0x7e, 0xd9, 0xe0, 0x16, 0x29, 0xf4, 0x14, 0x5a, 0x1c, 0xaf, 0x23, 0xa1, 0x78,
+	0xb4, 0x22, 0xa9, 0xc6, 0x67, 0xc6, 0x90, 0x33, 0xae, 0x84, 0x4d, 0x8e, 0xd7, 0x9f, 0x14, 0xff,
+	0x5a, 0xe6, 0xd1, 0x0b, 0x38, 0xd1, 0xbd, 0x98, 0x6e, 0xe7, 0xf5, 0x70, 0x7b, 0xee, 0x76, 0xd1,
+	0xfc, 0x61, 0xb9, 0x68, 0xe3, 0x4a, 0xe8, 0x72, 0xbc, 0x7e, 0x47, 0x09, 0xfa, 0x06, 0x0d, 0xb3,
+	0xe3, 0x19, 0x89, 0x75, 0xa5, 0xb4, 0x7c, 0x76, 0x9c, 0x65, 0x7f, 0xb2, 0x23, 0x1d, 0x57, 0xc2,
+	0x5b, 0x28, 0x34, 0x04, 0x47, 0x09, 0xcd, 0xac, 0x1d, 0x77, 0x7d, 0x25, 0xf3, 0x8b, 0x28, 0x60,
+	0x85, 0xb8, 0x33, 0x85, 0xc6, 0xee, 0x29, 0xe8, 0x35, 0x38, 0x7a, 0xb6, 0x7a, 0x0e, 0xf6, 0x3f,
+	0x0c, 0xb7, 0x10, 0x75, 0x46, 0xe0, 0x18, 0xfe, 0xff, 0x61, 0x06, 0x2e, 0xd4, 0x74, 0x30, 0x78,
+	0x03, 0x5e, 0x2c, 0xf9, 0x01, 0xed, 0xe0, 0xc1, 0xa0, 0x2c, 0x98, 0x3f, 0x64, 0x88, 0x73, 0x7c,
+	0xa9, 0x6f, 0xe4, 0xd2, 0x9a, 0xbb, 0xe6, 0x6a, 0xce, 0xfe, 0x04, 0x00, 0x00, 0xff, 0xff, 0xa9,
+	0x9e, 0x64, 0x6f, 0x89, 0x04, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.proto b/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.proto
index a815152..e08fa44 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.proto
@@ -16,7 +16,7 @@
 
 package google.bigtable.admin.table.v1;
 
-import "google.golang.org/cloud/bigtable/internal/duration_proto/duration.proto";
+import "google/protobuf/duration.proto";
 
 option java_multiple_files = true;
 option java_outer_classname = "BigtableTableDataProto";
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.pb.go
index 018cb26..53d2f2b 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.pb.go
@@ -8,7 +8,7 @@
 import fmt "fmt"
 import math "math"
 import google_bigtable_admin_table_v11 "google.golang.org/cloud/bigtable/internal/table_data_proto"
-import google_protobuf1 "google.golang.org/cloud/bigtable/internal/empty"
+import google_protobuf1 "github.com/golang/protobuf/ptypes/empty"
 
 import (
 	context "golang.org/x/net/context"
@@ -24,6 +24,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for BigtableTableService service
 
 type BigtableTableServiceClient interface {
@@ -46,6 +50,8 @@
 	UpdateColumnFamily(ctx context.Context, in *google_bigtable_admin_table_v11.ColumnFamily, opts ...grpc.CallOption) (*google_bigtable_admin_table_v11.ColumnFamily, error)
 	// Permanently deletes a specified column family and all of its data.
 	DeleteColumnFamily(ctx context.Context, in *DeleteColumnFamilyRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error)
+	// Delete all rows in a table corresponding to a particular prefix
+	BulkDeleteRows(ctx context.Context, in *BulkDeleteRowsRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error)
 }
 
 type bigtableTableServiceClient struct {
@@ -128,6 +134,15 @@
 	return out, nil
 }
 
+func (c *bigtableTableServiceClient) BulkDeleteRows(ctx context.Context, in *BulkDeleteRowsRequest, opts ...grpc.CallOption) (*google_protobuf1.Empty, error) {
+	out := new(google_protobuf1.Empty)
+	err := grpc.Invoke(ctx, "/google.bigtable.admin.table.v1.BigtableTableService/BulkDeleteRows", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
 // Server API for BigtableTableService service
 
 type BigtableTableServiceServer interface {
@@ -150,106 +165,174 @@
 	UpdateColumnFamily(context.Context, *google_bigtable_admin_table_v11.ColumnFamily) (*google_bigtable_admin_table_v11.ColumnFamily, error)
 	// Permanently deletes a specified column family and all of its data.
 	DeleteColumnFamily(context.Context, *DeleteColumnFamilyRequest) (*google_protobuf1.Empty, error)
+	// Delete all rows in a table corresponding to a particular prefix
+	BulkDeleteRows(context.Context, *BulkDeleteRowsRequest) (*google_protobuf1.Empty, error)
 }
 
 func RegisterBigtableTableServiceServer(s *grpc.Server, srv BigtableTableServiceServer) {
 	s.RegisterService(&_BigtableTableService_serviceDesc, srv)
 }
 
-func _BigtableTableService_CreateTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_CreateTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(CreateTableRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).CreateTable(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).CreateTable(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/CreateTable",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).CreateTable(ctx, req.(*CreateTableRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_ListTables_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_ListTables_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(ListTablesRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).ListTables(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).ListTables(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/ListTables",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).ListTables(ctx, req.(*ListTablesRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_GetTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_GetTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(GetTableRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).GetTable(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).GetTable(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/GetTable",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).GetTable(ctx, req.(*GetTableRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_DeleteTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_DeleteTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(DeleteTableRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).DeleteTable(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).DeleteTable(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/DeleteTable",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).DeleteTable(ctx, req.(*DeleteTableRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_RenameTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_RenameTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(RenameTableRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).RenameTable(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).RenameTable(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/RenameTable",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).RenameTable(ctx, req.(*RenameTableRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_CreateColumnFamily_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_CreateColumnFamily_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(CreateColumnFamilyRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).CreateColumnFamily(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).CreateColumnFamily(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/CreateColumnFamily",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).CreateColumnFamily(ctx, req.(*CreateColumnFamilyRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_UpdateColumnFamily_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_UpdateColumnFamily_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(google_bigtable_admin_table_v11.ColumnFamily)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).UpdateColumnFamily(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).UpdateColumnFamily(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/UpdateColumnFamily",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).UpdateColumnFamily(ctx, req.(*google_bigtable_admin_table_v11.ColumnFamily))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _BigtableTableService_DeleteColumnFamily_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _BigtableTableService_DeleteColumnFamily_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(DeleteColumnFamilyRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(BigtableTableServiceServer).DeleteColumnFamily(ctx, in)
-	if err != nil {
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).DeleteColumnFamily(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/DeleteColumnFamily",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).DeleteColumnFamily(ctx, req.(*DeleteColumnFamilyRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _BigtableTableService_BulkDeleteRows_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(BulkDeleteRowsRequest)
+	if err := dec(in); err != nil {
 		return nil, err
 	}
-	return out, nil
+	if interceptor == nil {
+		return srv.(BigtableTableServiceServer).BulkDeleteRows(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.bigtable.admin.table.v1.BigtableTableService/BulkDeleteRows",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BigtableTableServiceServer).BulkDeleteRows(ctx, req.(*BulkDeleteRowsRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 var _BigtableTableService_serviceDesc = grpc.ServiceDesc{
@@ -288,33 +371,38 @@
 			MethodName: "DeleteColumnFamily",
 			Handler:    _BigtableTableService_DeleteColumnFamily_Handler,
 		},
+		{
+			MethodName: "BulkDeleteRows",
+			Handler:    _BigtableTableService_BulkDeleteRows_Handler,
+		},
 	},
 	Streams: []grpc.StreamDesc{},
 }
 
 var fileDescriptor1 = []byte{
-	// 353 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xb4, 0x93, 0xbd, 0x4f, 0xc3, 0x30,
-	0x10, 0xc5, 0x61, 0xa9, 0x90, 0xbb, 0x59, 0x88, 0x21, 0x03, 0x43, 0x25, 0x36, 0xe4, 0xa8, 0x65,
-	0x42, 0x6c, 0x29, 0x1f, 0x0b, 0x43, 0x55, 0xca, 0x02, 0x43, 0xe4, 0x24, 0x87, 0x65, 0xe4, 0x8f,
-	0x10, 0x3b, 0x95, 0x3a, 0xf1, 0x77, 0xb3, 0x41, 0xe2, 0x06, 0x02, 0x54, 0x38, 0x1e, 0x58, 0xac,
-	0xda, 0x7e, 0xef, 0xfd, 0x7c, 0x77, 0x0d, 0x7a, 0x60, 0x5a, 0x33, 0x01, 0x84, 0x69, 0x41, 0x15,
-	0x23, 0xba, 0x62, 0x71, 0x2e, 0x74, 0x5d, 0xc4, 0x19, 0x67, 0x96, 0x66, 0x02, 0x62, 0xae, 0x2c,
-	0x54, 0x8a, 0x8a, 0xb8, 0xdd, 0xa6, 0x06, 0xaa, 0x35, 0xcf, 0x21, 0x2d, 0x2b, 0x6d, 0xf5, 0xa7,
-	0x2a, 0xfd, 0x76, 0x49, 0xda, 0x4b, 0x7c, 0xbc, 0xcd, 0xee, 0x44, 0x84, 0x16, 0x92, 0x2b, 0xe2,
-	0x7e, 0xaf, 0xa7, 0xd1, 0x2a, 0x94, 0x5d, 0x50, 0x4b, 0x77, 0x83, 0x9b, 0x1b, 0x47, 0x8d, 0xf2,
-	0xff, 0xa8, 0x28, 0x95, 0x60, 0x0c, 0x65, 0x60, 0xb6, 0x90, 0x8b, 0xe1, 0x10, 0x90, 0xa5, 0xdd,
-	0xb8, 0xd5, 0x99, 0x67, 0x6f, 0x23, 0x74, 0x98, 0x6c, 0x75, 0xab, 0x66, 0xb9, 0x73, 0x10, 0xfc,
-	0x8c, 0xc6, 0xf3, 0x0a, 0xa8, 0x75, 0xa7, 0x78, 0x46, 0xfe, 0x6e, 0x20, 0xe9, 0x89, 0x97, 0xf0,
-	0x52, 0x83, 0xb1, 0xd1, 0x89, 0xcf, 0xd3, 0xaa, 0x27, 0x7b, 0xb8, 0x46, 0xe8, 0x96, 0x1b, 0xdb,
-	0x6e, 0x0d, 0x9e, 0xfa, 0x6c, 0x5f, 0xda, 0x8e, 0x34, 0x0b, 0xb1, 0x98, 0x52, 0x2b, 0xd3, 0x60,
-	0x0b, 0x74, 0x70, 0x03, 0xee, 0x18, 0xc7, 0xbe, 0x84, 0x4e, 0x19, 0x5c, 0xdc, 0x23, 0x1a, 0x5f,
-	0x82, 0x80, 0xc1, 0x8d, 0xec, 0x89, 0x3b, 0xd6, 0x51, 0xe7, 0x69, 0x67, 0x96, 0xd5, 0x4f, 0xe4,
-	0xaa, 0x19, 0xa1, 0x0b, 0x5f, 0x82, 0xa2, 0x72, 0x68, 0x78, 0x4f, 0xec, 0x0f, 0x7f, 0x45, 0xd8,
-	0x4d, 0x75, 0xae, 0x45, 0x2d, 0xd5, 0x35, 0x95, 0x5c, 0x6c, 0xf0, 0xf9, 0xb0, 0x7f, 0x42, 0xdf,
-	0xd3, 0xa1, 0x4e, 0xbd, 0xd6, 0x9e, 0xe9, 0xe3, 0x01, 0x15, 0xc2, 0xf7, 0x65, 0xf1, 0xf3, 0x01,
-	0x41, 0x29, 0xc1, 0x4c, 0x8e, 0xb0, 0x9b, 0x40, 0x58, 0xd1, 0xbf, 0x3d, 0xde, 0xfe, 0x26, 0x09,
-	0x9a, 0xe4, 0x5a, 0x7a, 0x92, 0x93, 0x68, 0xd7, 0xe7, 0x69, 0x16, 0x4d, 0xd8, 0x62, 0x3f, 0x1b,
-	0xb5, 0xa9, 0x67, 0xef, 0x01, 0x00, 0x00, 0xff, 0xff, 0x26, 0xcd, 0xc1, 0xb6, 0x3c, 0x05, 0x00,
-	0x00,
+	// 378 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xb4, 0x94, 0x3f, 0x4f, 0xeb, 0x30,
+	0x14, 0xc5, 0xfb, 0x96, 0xf7, 0x9e, 0x5c, 0xe9, 0x0d, 0xd6, 0x13, 0x43, 0x90, 0x18, 0x2a, 0xb1,
+	0x21, 0x47, 0x2d, 0x62, 0x60, 0x4d, 0xf9, 0xb3, 0x30, 0x54, 0xa5, 0x2c, 0x30, 0x44, 0x4e, 0x72,
+	0xb1, 0x0c, 0xfe, 0x13, 0x62, 0xa7, 0xa8, 0x13, 0x5f, 0x94, 0x0f, 0x83, 0x12, 0xd7, 0x90, 0x42,
+	0x85, 0x9b, 0x81, 0xa5, 0xaa, 0x7d, 0xcf, 0x39, 0xbf, 0xdc, 0x7b, 0xa3, 0xa0, 0x5b, 0xa6, 0x35,
+	0x13, 0x40, 0x98, 0x16, 0x54, 0x31, 0xa2, 0x2b, 0x16, 0xe7, 0x42, 0xd7, 0x45, 0x9c, 0x71, 0x66,
+	0x69, 0x26, 0x20, 0xe6, 0xca, 0x42, 0xa5, 0xa8, 0x88, 0xdb, 0x63, 0x6a, 0xa0, 0x5a, 0xf2, 0x1c,
+	0xd2, 0xb2, 0xd2, 0x56, 0xbf, 0xab, 0xd2, 0x8d, 0x22, 0x69, 0x8b, 0xf8, 0x60, 0x9d, 0xed, 0x45,
+	0x84, 0x16, 0x92, 0x2b, 0xe2, 0xfe, 0x2f, 0xc7, 0xd1, 0xa2, 0x2f, 0xbb, 0xa0, 0x96, 0x6e, 0x07,
+	0x37, 0x15, 0x47, 0x8d, 0xf2, 0x9f, 0xe8, 0x28, 0x95, 0x60, 0x0c, 0x65, 0x60, 0xd6, 0x90, 0x7d,
+	0x07, 0x89, 0xdb, 0x53, 0x56, 0xdf, 0xc7, 0x20, 0x4b, 0xbb, 0x72, 0xc5, 0xc9, 0xeb, 0x1f, 0xf4,
+	0x3f, 0x59, 0xc7, 0x2c, 0x9a, 0x9f, 0x6b, 0x17, 0x82, 0x1f, 0xd0, 0x70, 0x5a, 0x01, 0xb5, 0xee,
+	0x16, 0x4f, 0xc8, 0xf7, 0x03, 0x22, 0x1d, 0xf1, 0x1c, 0x9e, 0x6a, 0x30, 0x36, 0x3a, 0x0c, 0x79,
+	0x5a, 0xf5, 0x68, 0x80, 0x6b, 0x84, 0xae, 0xb8, 0xb1, 0xed, 0xd1, 0xe0, 0x71, 0xc8, 0xf6, 0xa1,
+	0xf5, 0xa4, 0x49, 0x1f, 0x8b, 0x29, 0xb5, 0x32, 0x0d, 0xb6, 0x40, 0x7f, 0x2f, 0xc1, 0x5d, 0xe3,
+	0x38, 0x94, 0xe0, 0x95, 0xbd, 0x9b, 0xbb, 0x43, 0xc3, 0x33, 0x10, 0xb0, 0xf3, 0x20, 0x3b, 0x62,
+	0xcf, 0xda, 0xf3, 0x1e, 0xbf, 0x42, 0x72, 0xde, 0xac, 0xd0, 0x85, 0xcf, 0x41, 0x51, 0xb9, 0x6b,
+	0x78, 0x47, 0x1c, 0x0e, 0x7f, 0x41, 0xd8, 0x6d, 0x75, 0xaa, 0x45, 0x2d, 0xd5, 0x05, 0x95, 0x5c,
+	0xac, 0xf0, 0xe9, 0x6e, 0x6f, 0x42, 0xd7, 0xe3, 0x51, 0x47, 0x41, 0x6b, 0xc7, 0x34, 0x1a, 0xe0,
+	0x0a, 0xe1, 0x9b, 0xb2, 0xf8, 0xfc, 0x00, 0xbd, 0x52, 0x7a, 0x33, 0x39, 0xc2, 0x6e, 0x03, 0xfd,
+	0x9a, 0xfe, 0xea, 0x09, 0xcf, 0x97, 0xa2, 0x7f, 0x49, 0x2d, 0x1e, 0x9d, 0x75, 0xae, 0x9f, 0x0d,
+	0x3e, 0x09, 0x61, 0x36, 0xf5, 0x41, 0x44, 0x92, 0xa0, 0x51, 0xae, 0x65, 0x20, 0x35, 0x89, 0xb6,
+	0x7d, 0x01, 0xcc, 0xac, 0x09, 0x9b, 0xfd, 0xca, 0x7e, 0xb7, 0xa9, 0xc7, 0x6f, 0x01, 0x00, 0x00,
+	0xff, 0xff, 0x61, 0xcc, 0xfb, 0x30, 0x7f, 0x05, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.proto b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.proto
index 1ccdfa2..1777be3 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service.proto
@@ -18,7 +18,7 @@
 
 import "google.golang.org/cloud/bigtable/internal/table_data_proto/bigtable_table_data.proto";
 import "google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.proto";
-import "google.golang.org/cloud/bigtable/internal/empty/empty.proto";
+import "google/protobuf/empty.proto";
 
 option java_multiple_files = true;
 option java_outer_classname = "BigtableTableServicesProto";
@@ -62,4 +62,8 @@
   // Permanently deletes a specified column family and all of its data.
   rpc DeleteColumnFamily(DeleteColumnFamilyRequest) returns (google.protobuf.Empty) {
   }
+
+  // Delete all rows in a table corresponding to a particular prefix
+  rpc BulkDeleteRows(BulkDeleteRowsRequest) returns (google.protobuf.Empty) {
+  }
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.pb.go b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.pb.go
index 7005226..ed84ec6 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.pb.go
+++ b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.pb.go
@@ -18,6 +18,7 @@
 	RenameTableRequest
 	CreateColumnFamilyRequest
 	DeleteColumnFamilyRequest
+	BulkDeleteRowsRequest
 */
 package google_bigtable_admin_table_v1
 
@@ -31,12 +32,16 @@
 var _ = fmt.Errorf
 var _ = math.Inf
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
 type CreateTableRequest struct {
 	// The unique name of the cluster in which to create the new table.
 	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
 	// The name by which the new table should be referred to within the cluster,
 	// e.g. "foobar" rather than "<cluster_name>/tables/foobar".
-	TableId string `protobuf:"bytes,2,opt,name=table_id" json:"table_id,omitempty"`
+	TableId string `protobuf:"bytes,2,opt,name=table_id,json=tableId" json:"table_id,omitempty"`
 	// The Table to create. The `name` field of the Table and all of its
 	// ColumnFamilies must be left blank, and will be populated in the response.
 	Table *google_bigtable_admin_table_v11.Table `protobuf:"bytes,3,opt,name=table" json:"table,omitempty"`
@@ -55,7 +60,7 @@
 	//    - Tablet 3 [customer_1, customer_2) => {"customer_1"}.
 	//    - Tablet 4 [customer_2, other)      => {"customer_2"}.
 	//    - Tablet 5 [other, )                => {"other", "zz"}.
-	InitialSplitKeys []string `protobuf:"bytes,4,rep,name=initial_split_keys" json:"initial_split_keys,omitempty"`
+	InitialSplitKeys []string `protobuf:"bytes,4,rep,name=initial_split_keys,json=initialSplitKeys" json:"initial_split_keys,omitempty"`
 }
 
 func (m *CreateTableRequest) Reset()                    { *m = CreateTableRequest{} }
@@ -123,7 +128,7 @@
 	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
 	// The new name by which the table should be referred to within its containing
 	// cluster, e.g. "foobar" rather than "<cluster_name>/tables/foobar".
-	NewId string `protobuf:"bytes,2,opt,name=new_id" json:"new_id,omitempty"`
+	NewId string `protobuf:"bytes,2,opt,name=new_id,json=newId" json:"new_id,omitempty"`
 }
 
 func (m *RenameTableRequest) Reset()                    { *m = RenameTableRequest{} }
@@ -136,9 +141,9 @@
 	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
 	// The name by which the new column family should be referred to within the
 	// table, e.g. "foobar" rather than "<table_name>/columnFamilies/foobar".
-	ColumnFamilyId string `protobuf:"bytes,2,opt,name=column_family_id" json:"column_family_id,omitempty"`
+	ColumnFamilyId string `protobuf:"bytes,2,opt,name=column_family_id,json=columnFamilyId" json:"column_family_id,omitempty"`
 	// The column family to create. The `name` field must be left blank.
-	ColumnFamily *google_bigtable_admin_table_v11.ColumnFamily `protobuf:"bytes,3,opt,name=column_family" json:"column_family,omitempty"`
+	ColumnFamily *google_bigtable_admin_table_v11.ColumnFamily `protobuf:"bytes,3,opt,name=column_family,json=columnFamily" json:"column_family,omitempty"`
 }
 
 func (m *CreateColumnFamilyRequest) Reset()                    { *m = CreateColumnFamilyRequest{} }
@@ -163,6 +168,124 @@
 func (*DeleteColumnFamilyRequest) ProtoMessage()               {}
 func (*DeleteColumnFamilyRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
 
+type BulkDeleteRowsRequest struct {
+	// The unique name of the table on which to perform the bulk delete
+	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
+	// Types that are valid to be assigned to Target:
+	//	*BulkDeleteRowsRequest_RowKeyPrefix
+	//	*BulkDeleteRowsRequest_DeleteAllDataFromTable
+	Target isBulkDeleteRowsRequest_Target `protobuf_oneof:"target"`
+}
+
+func (m *BulkDeleteRowsRequest) Reset()                    { *m = BulkDeleteRowsRequest{} }
+func (m *BulkDeleteRowsRequest) String() string            { return proto.CompactTextString(m) }
+func (*BulkDeleteRowsRequest) ProtoMessage()               {}
+func (*BulkDeleteRowsRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
+
+type isBulkDeleteRowsRequest_Target interface {
+	isBulkDeleteRowsRequest_Target()
+}
+
+type BulkDeleteRowsRequest_RowKeyPrefix struct {
+	RowKeyPrefix []byte `protobuf:"bytes,2,opt,name=row_key_prefix,json=rowKeyPrefix,proto3,oneof"`
+}
+type BulkDeleteRowsRequest_DeleteAllDataFromTable struct {
+	DeleteAllDataFromTable bool `protobuf:"varint,3,opt,name=delete_all_data_from_table,json=deleteAllDataFromTable,oneof"`
+}
+
+func (*BulkDeleteRowsRequest_RowKeyPrefix) isBulkDeleteRowsRequest_Target()           {}
+func (*BulkDeleteRowsRequest_DeleteAllDataFromTable) isBulkDeleteRowsRequest_Target() {}
+
+func (m *BulkDeleteRowsRequest) GetTarget() isBulkDeleteRowsRequest_Target {
+	if m != nil {
+		return m.Target
+	}
+	return nil
+}
+
+func (m *BulkDeleteRowsRequest) GetRowKeyPrefix() []byte {
+	if x, ok := m.GetTarget().(*BulkDeleteRowsRequest_RowKeyPrefix); ok {
+		return x.RowKeyPrefix
+	}
+	return nil
+}
+
+func (m *BulkDeleteRowsRequest) GetDeleteAllDataFromTable() bool {
+	if x, ok := m.GetTarget().(*BulkDeleteRowsRequest_DeleteAllDataFromTable); ok {
+		return x.DeleteAllDataFromTable
+	}
+	return false
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*BulkDeleteRowsRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _BulkDeleteRowsRequest_OneofMarshaler, _BulkDeleteRowsRequest_OneofUnmarshaler, _BulkDeleteRowsRequest_OneofSizer, []interface{}{
+		(*BulkDeleteRowsRequest_RowKeyPrefix)(nil),
+		(*BulkDeleteRowsRequest_DeleteAllDataFromTable)(nil),
+	}
+}
+
+func _BulkDeleteRowsRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*BulkDeleteRowsRequest)
+	// target
+	switch x := m.Target.(type) {
+	case *BulkDeleteRowsRequest_RowKeyPrefix:
+		b.EncodeVarint(2<<3 | proto.WireBytes)
+		b.EncodeRawBytes(x.RowKeyPrefix)
+	case *BulkDeleteRowsRequest_DeleteAllDataFromTable:
+		t := uint64(0)
+		if x.DeleteAllDataFromTable {
+			t = 1
+		}
+		b.EncodeVarint(3<<3 | proto.WireVarint)
+		b.EncodeVarint(t)
+	case nil:
+	default:
+		return fmt.Errorf("BulkDeleteRowsRequest.Target has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _BulkDeleteRowsRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*BulkDeleteRowsRequest)
+	switch tag {
+	case 2: // target.row_key_prefix
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		x, err := b.DecodeRawBytes(true)
+		m.Target = &BulkDeleteRowsRequest_RowKeyPrefix{x}
+		return true, err
+	case 3: // target.delete_all_data_from_table
+		if wire != proto.WireVarint {
+			return true, proto.ErrInternalBadWireType
+		}
+		x, err := b.DecodeVarint()
+		m.Target = &BulkDeleteRowsRequest_DeleteAllDataFromTable{x != 0}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _BulkDeleteRowsRequest_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*BulkDeleteRowsRequest)
+	// target
+	switch x := m.Target.(type) {
+	case *BulkDeleteRowsRequest_RowKeyPrefix:
+		n += proto.SizeVarint(2<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(len(x.RowKeyPrefix)))
+		n += len(x.RowKeyPrefix)
+	case *BulkDeleteRowsRequest_DeleteAllDataFromTable:
+		n += proto.SizeVarint(3<<3 | proto.WireVarint)
+		n += 1
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
 func init() {
 	proto.RegisterType((*CreateTableRequest)(nil), "google.bigtable.admin.table.v1.CreateTableRequest")
 	proto.RegisterType((*ListTablesRequest)(nil), "google.bigtable.admin.table.v1.ListTablesRequest")
@@ -172,31 +295,41 @@
 	proto.RegisterType((*RenameTableRequest)(nil), "google.bigtable.admin.table.v1.RenameTableRequest")
 	proto.RegisterType((*CreateColumnFamilyRequest)(nil), "google.bigtable.admin.table.v1.CreateColumnFamilyRequest")
 	proto.RegisterType((*DeleteColumnFamilyRequest)(nil), "google.bigtable.admin.table.v1.DeleteColumnFamilyRequest")
+	proto.RegisterType((*BulkDeleteRowsRequest)(nil), "google.bigtable.admin.table.v1.BulkDeleteRowsRequest")
 }
 
 var fileDescriptor0 = []byte{
-	// 368 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x52, 0x5d, 0x4f, 0x2a, 0x31,
-	0x10, 0xcd, 0x5e, 0xb8, 0xe4, 0x32, 0xd7, 0x0f, 0xec, 0xd3, 0xc2, 0x83, 0x42, 0x13, 0x13, 0x4c,
-	0xcc, 0x6e, 0x44, 0xfd, 0x03, 0x60, 0x34, 0x46, 0x4d, 0x08, 0xf2, 0xbe, 0x29, 0xbb, 0xe3, 0xa6,
-	0xb1, 0xdb, 0xe2, 0xb6, 0x60, 0xf8, 0x03, 0xc6, 0x9f, 0x2d, 0xdb, 0x5d, 0x11, 0x12, 0x04, 0x7d,
-	0xeb, 0xf4, 0x9c, 0x76, 0xce, 0x9c, 0x39, 0x10, 0xc6, 0x4a, 0xc5, 0x02, 0xbd, 0x58, 0x09, 0x26,
-	0x63, 0x4f, 0xa5, 0xb1, 0x1f, 0x0a, 0x35, 0x89, 0xfc, 0x11, 0x8f, 0x0d, 0x1b, 0x09, 0xf4, 0xb9,
-	0x34, 0x98, 0x4a, 0x26, 0x7c, 0x5b, 0x06, 0x1a, 0xd3, 0x29, 0x0f, 0x31, 0x18, 0xa7, 0xca, 0xa8,
-	0x05, 0x2b, 0x58, 0x05, 0x13, 0xd4, 0x9a, 0xc5, 0xa8, 0x3d, 0xcb, 0x22, 0x87, 0x45, 0x93, 0x4f,
-	0xb6, 0xc7, 0xa2, 0x84, 0x4b, 0x2f, 0x3f, 0x4f, 0xcf, 0x1a, 0xc3, 0xdf, 0x8a, 0x88, 0x98, 0x61,
-	0xeb, 0x15, 0x64, 0x48, 0xde, 0x95, 0xbe, 0x39, 0x40, 0x7a, 0x29, 0x32, 0x83, 0xc3, 0x0c, 0x1a,
-	0xe0, 0xcb, 0x04, 0xb5, 0x21, 0x3b, 0x50, 0x96, 0x2c, 0x41, 0xd7, 0x69, 0x3a, 0xed, 0x2a, 0xa9,
-	0xc1, 0xbf, 0xfc, 0x21, 0x8f, 0xdc, 0x3f, 0xf6, 0xe6, 0x02, 0xfe, 0xda, 0x1b, 0xb7, 0x34, 0x2f,
-	0xff, 0x77, 0x8e, 0xbd, 0xcd, 0xe2, 0x3d, 0xfb, 0x39, 0x69, 0x00, 0xe1, 0x92, 0x1b, 0xce, 0x44,
-	0xa0, 0xc7, 0x82, 0x9b, 0xe0, 0x19, 0x67, 0xda, 0x2d, 0x37, 0x4b, 0xed, 0x2a, 0x6d, 0xc1, 0xc1,
-	0x3d, 0xd7, 0xc6, 0x12, 0xf5, 0x5a, 0x19, 0xf4, 0x0e, 0xc8, 0x32, 0x45, 0x8f, 0x95, 0xd4, 0x48,
-	0x2e, 0xa1, 0x62, 0xdb, 0xe8, 0x39, 0xab, 0xf4, 0x63, 0x2d, 0xf4, 0x08, 0xf6, 0x6f, 0xd0, 0x7c,
-	0x3f, 0x34, 0xa5, 0x40, 0xae, 0x50, 0xe0, 0x26, 0x63, 0x68, 0x07, 0xc8, 0x00, 0xb3, 0x7a, 0x83,
-	0x79, 0x7b, 0x50, 0x91, 0xf8, 0xba, 0xb0, 0x8e, 0xbe, 0x3b, 0x50, 0xcf, 0x1d, 0xef, 0x29, 0x31,
-	0x49, 0xe4, 0x35, 0x4b, 0xb8, 0x98, 0xad, 0x7f, 0xeb, 0x42, 0x2d, 0xb4, 0xa4, 0xe0, 0xc9, 0xb2,
-	0xbe, 0x16, 0xd0, 0x83, 0xdd, 0x15, 0xa4, 0x58, 0xc4, 0xe9, 0xb6, 0xe1, 0x97, 0x7b, 0xd2, 0x13,
-	0xa8, 0xe7, 0x23, 0x6e, 0x55, 0xd2, 0xbd, 0x05, 0x1a, 0xaa, 0x64, 0xcb, 0xef, 0xdd, 0x56, 0xb7,
-	0x00, 0xac, 0x1f, 0x8f, 0x79, 0xd0, 0x1f, 0x8a, 0x9c, 0xf7, 0xb3, 0xc0, 0xf5, 0x9d, 0x51, 0xc5,
-	0x26, 0xef, 0xfc, 0x23, 0x00, 0x00, 0xff, 0xff, 0xe1, 0x1c, 0x6f, 0x09, 0x56, 0x03, 0x00, 0x00,
+	// 503 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x54, 0x51, 0x6f, 0xd3, 0x30,
+	0x10, 0x5e, 0xe8, 0x56, 0xda, 0xa3, 0x8c, 0x61, 0x69, 0xa8, 0x9d, 0x04, 0x2a, 0x96, 0x06, 0x7d,
+	0x98, 0x52, 0x01, 0x8f, 0x80, 0x10, 0xdd, 0x34, 0x56, 0x0d, 0x50, 0x49, 0xf7, 0x1e, 0xb9, 0xc9,
+	0x35, 0xb2, 0xe6, 0xd8, 0xc5, 0x76, 0x57, 0xfa, 0x87, 0x78, 0x42, 0xfc, 0x46, 0x14, 0x3b, 0xac,
+	0xad, 0x84, 0x9a, 0xf1, 0x76, 0xf6, 0x7d, 0x77, 0xf7, 0xf9, 0xbb, 0x3b, 0x43, 0x92, 0x29, 0x95,
+	0x09, 0x0c, 0x33, 0x25, 0x98, 0xcc, 0x42, 0xa5, 0xb3, 0x7e, 0x22, 0xd4, 0x3c, 0xed, 0x4f, 0x78,
+	0x66, 0xd9, 0x44, 0x60, 0x9f, 0x4b, 0x8b, 0x5a, 0x32, 0xd1, 0x77, 0xc7, 0xd8, 0xa0, 0xbe, 0xe1,
+	0x09, 0xc6, 0x33, 0xad, 0xac, 0xba, 0x45, 0xc5, 0x9b, 0xce, 0x1c, 0x8d, 0x61, 0x19, 0x9a, 0xd0,
+	0xa1, 0xc8, 0xb3, 0xb2, 0xc8, 0x5f, 0x74, 0xc8, 0xd2, 0x9c, 0xcb, 0xd0, 0xdb, 0x37, 0xaf, 0x8e,
+	0xae, 0xfe, 0x97, 0x44, 0xca, 0x2c, 0xfb, 0x37, 0x83, 0xc2, 0xe3, 0xab, 0xd2, 0xdf, 0x01, 0x90,
+	0x53, 0x8d, 0xcc, 0xe2, 0x55, 0xe1, 0x8a, 0xf0, 0xfb, 0x1c, 0x8d, 0x25, 0x04, 0x76, 0x25, 0xcb,
+	0xb1, 0x1d, 0x74, 0x83, 0x5e, 0x33, 0x72, 0x36, 0xe9, 0x40, 0xc3, 0x87, 0xf3, 0xb4, 0x7d, 0xcf,
+	0xdd, 0xdf, 0x77, 0xe7, 0x61, 0x4a, 0xde, 0xc2, 0x9e, 0x33, 0xdb, 0xb5, 0x6e, 0xd0, 0x7b, 0xf0,
+	0xfa, 0x38, 0xdc, 0xfe, 0x96, 0xd0, 0xd7, 0xf2, 0x31, 0xe4, 0x04, 0x08, 0x97, 0xdc, 0x72, 0x26,
+	0x62, 0x33, 0x13, 0xdc, 0xc6, 0xd7, 0xb8, 0x34, 0xed, 0xdd, 0x6e, 0xad, 0xd7, 0x8c, 0x0e, 0x4a,
+	0xcf, 0xb8, 0x70, 0x5c, 0xe2, 0xd2, 0xd0, 0x97, 0xf0, 0xf8, 0x33, 0x37, 0xd6, 0x65, 0x30, 0x5b,
+	0xe8, 0xd2, 0x31, 0x90, 0x75, 0xa0, 0x99, 0x29, 0x69, 0x90, 0xbc, 0x87, 0xba, 0xab, 0x6a, 0xda,
+	0x41, 0xb7, 0x76, 0x77, 0xaa, 0x65, 0x10, 0x3d, 0x86, 0x47, 0x9f, 0xd0, 0x56, 0x49, 0x45, 0x7b,
+	0x40, 0xce, 0x50, 0x60, 0xb5, 0xa8, 0xf4, 0x03, 0x90, 0x08, 0x0b, 0xab, 0x52, 0xfe, 0x43, 0xa8,
+	0x4b, 0x5c, 0xac, 0xc4, 0xdf, 0x93, 0xb8, 0x18, 0xa6, 0xf4, 0x57, 0x00, 0x1d, 0xdf, 0xc0, 0x53,
+	0x25, 0xe6, 0xb9, 0x3c, 0x67, 0x39, 0x17, 0xcb, 0x6d, 0x89, 0x7a, 0x70, 0x90, 0x38, 0x68, 0x3c,
+	0x75, 0xd8, 0x55, 0xca, 0xfd, 0x64, 0x2d, 0xc5, 0x30, 0x25, 0xdf, 0xe0, 0xe1, 0x06, 0xb2, 0x6c,
+	0xef, 0x49, 0x95, 0x66, 0x1b, 0x4c, 0x5a, 0xeb, 0x49, 0x69, 0x1f, 0x3a, 0x5e, 0x99, 0x3b, 0xb2,
+	0xa5, 0x3f, 0x03, 0x38, 0x1c, 0xcc, 0xc5, 0xb5, 0x8f, 0x8a, 0xd4, 0xe2, 0xb6, 0xe9, 0x4f, 0x01,
+	0xfc, 0x3c, 0xae, 0xc5, 0x34, 0xdd, 0xcd, 0xd7, 0xe2, 0x99, 0x2f, 0x60, 0x5f, 0xab, 0x45, 0x31,
+	0x4c, 0xf1, 0x4c, 0xe3, 0x94, 0xff, 0x70, 0x8f, 0x6c, 0x5d, 0xec, 0x44, 0x2d, 0xad, 0x16, 0x97,
+	0xb8, 0x1c, 0xb9, 0x5b, 0xf2, 0x0e, 0x8e, 0x52, 0x97, 0x3b, 0x66, 0x42, 0xf8, 0xa5, 0x99, 0x6a,
+	0x95, 0xc7, 0xab, 0x81, 0x6e, 0x5c, 0xec, 0x44, 0x4f, 0x3c, 0xe6, 0xa3, 0x10, 0x67, 0xcc, 0xb2,
+	0x73, 0xad, 0x72, 0xd7, 0xb0, 0x41, 0xa3, 0x98, 0x27, 0x9d, 0xa1, 0x1d, 0x0c, 0x81, 0x26, 0x2a,
+	0xaf, 0x90, 0x66, 0xf0, 0x7c, 0x50, 0x3a, 0x5c, 0xf8, 0xd8, 0x7f, 0x05, 0x5f, 0xca, 0x9f, 0x60,
+	0x54, 0xac, 0xe4, 0x28, 0x98, 0xd4, 0xdd, 0x6e, 0xbe, 0xf9, 0x13, 0x00, 0x00, 0xff, 0xff, 0xa1,
+	0x16, 0xdf, 0x02, 0x78, 0x04, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.proto b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.proto
index 9fa1b6a..264516e 100644
--- a/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.proto
+++ b/go/src/google.golang.org/cloud/bigtable/internal/table_service_proto/bigtable_table_service_messages.proto
@@ -99,3 +99,17 @@
   // The unique name of the column family to be deleted.
   string name = 1;
 }
+
+message BulkDeleteRowsRequest {
+  // The unique name of the table on which to perform the bulk delete.
+  string table_name = 1;
+
+  oneof target {
+    // Delete all rows that start with this row key prefix. Prefix cannot be
+    // zero length.
+    bytes row_key_prefix = 2;
+
+    // Delete all rows in the table. Setting this to false is a no-op.
+    bool delete_all_data_from_table = 3;
+  }
+}
diff --git a/go/src/google.golang.org/cloud/datastore/datastore.go b/go/src/google.golang.org/cloud/datastore/datastore.go
index 958a544..624757b 100644
--- a/go/src/google.golang.org/cloud/datastore/datastore.go
+++ b/go/src/google.golang.org/cloud/datastore/datastore.go
@@ -12,10 +12,7 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-// Package datastore contains a Google Cloud Datastore client.
-//
-// This package is experimental and may make backwards-incompatible changes.
-package datastore // import "google.golang.org/cloud/datastore"
+package datastore
 
 import (
 	"errors"
@@ -155,6 +152,16 @@
 		e.FieldName, e.StructType, e.Reason)
 }
 
+// GeoPoint represents a location as latitude/longitude in degrees.
+type GeoPoint struct {
+	Lat, Lng float64
+}
+
+// Valid returns whether a GeoPoint is within [-90, 90] latitude and [-180, 180] longitude.
+func (g GeoPoint) Valid() bool {
+	return -90 <= g.Lat && g.Lat <= 90 && -180 <= g.Lng && g.Lng <= 180
+}
+
 func keyToProto(k *Key) *pb.Key {
 	if k == nil {
 		return nil
diff --git a/go/src/google.golang.org/cloud/datastore/datastore_test.go b/go/src/google.golang.org/cloud/datastore/datastore_test.go
index e1d9771..5bb3eaa 100644
--- a/go/src/google.golang.org/cloud/datastore/datastore_test.go
+++ b/go/src/google.golang.org/cloud/datastore/datastore_test.go
@@ -69,11 +69,14 @@
 }
 
 var (
-	testKey0  = newKey("name0", nil)
-	testKey1a = newKey("name1", nil)
-	testKey1b = newKey("name1", nil)
-	testKey2a = newKey("name2", testKey0)
-	testKey2b = newKey("name2", testKey0)
+	testKey0     = newKey("name0", nil)
+	testKey1a    = newKey("name1", nil)
+	testKey1b    = newKey("name1", nil)
+	testKey2a    = newKey("name2", testKey0)
+	testKey2b    = newKey("name2", testKey0)
+	testGeoPt0   = GeoPoint{Lat: 1.2, Lng: 3.4}
+	testGeoPt1   = GeoPoint{Lat: 5, Lng: 10}
+	testBadGeoPt = GeoPoint{Lat: 1000, Lng: 34}
 )
 
 type B0 struct {
@@ -117,6 +120,14 @@
 
 type E struct{}
 
+type G0 struct {
+	G GeoPoint
+}
+
+type G1 struct {
+	G []GeoPoint
+}
+
 type K0 struct {
 	K *Key
 }
@@ -431,6 +442,36 @@
 		"",
 	},
 	{
+		"geopoint",
+		&G0{G: testGeoPt0},
+		&G0{G: testGeoPt0},
+		"",
+		"",
+	},
+	{
+		"geopoint invalid",
+		&G0{G: testBadGeoPt},
+		&G0{},
+		"invalid GeoPoint value",
+		"",
+	},
+	{
+		"geopoint as props",
+		&G0{G: testGeoPt0},
+		&PropertyList{
+			Property{Name: "G", Value: testGeoPt0, NoIndex: false},
+		},
+		"",
+		"",
+	},
+	{
+		"geopoint slice",
+		&G1{G: []GeoPoint{testGeoPt0, testGeoPt1}},
+		&G1{G: []GeoPoint{testGeoPt0, testGeoPt1}},
+		"",
+		"",
+	},
+	{
 		"key",
 		&K0{K: testKey1a},
 		&K0{K: testKey1b},
@@ -649,7 +690,7 @@
 			Property{Name: "B", Value: makeUint8Slice(1501), NoIndex: false},
 		},
 		nil,
-		"is too long to index",
+		"[]byte property too long to index",
 		"",
 	},
 	{
@@ -658,7 +699,31 @@
 			Property{Name: "B", Value: strings.Repeat("x", 1501), NoIndex: false},
 		},
 		nil,
-		"is too long to index",
+		"string property too long to index",
+		"",
+	},
+	{
+		"slice of []byte must be noindex",
+		&PropertyList{
+			Property{Name: "B", Value: []interface{}{
+				[]byte("short"),
+				makeUint8Slice(1501),
+			}, NoIndex: false},
+		},
+		nil,
+		"[]byte property too long to index",
+		"",
+	},
+	{
+		"slice of string must be noindex",
+		&PropertyList{
+			Property{Name: "B", Value: []interface{}{
+				"short",
+				strings.Repeat("x", 1501),
+			}, NoIndex: false},
+		},
+		nil,
+		"string property too long to index",
 		"",
 	},
 	{
@@ -672,9 +737,7 @@
 			Property{Name: "E", Value: int64(5), NoIndex: false},
 			Property{Name: "J", Value: int64(7), NoIndex: true},
 			Property{Name: "a", Value: int64(1), NoIndex: true},
-			Property{Name: "b", Value: int64(21), NoIndex: false, Multiple: true},
-			Property{Name: "b", Value: int64(22), NoIndex: false, Multiple: true},
-			Property{Name: "b", Value: int64(23), NoIndex: false, Multiple: true},
+			Property{Name: "b", Value: []interface{}{int64(21), int64(22), int64(23)}, NoIndex: false},
 		},
 		"",
 		"",
@@ -746,9 +809,7 @@
 			Property{Name: "F", Value: nil, NoIndex: false},
 			Property{Name: "K", Value: nil, NoIndex: false},
 			Property{Name: "T", Value: nil, NoIndex: false},
-			Property{Name: "J", Value: nil, NoIndex: false, Multiple: true},
-			Property{Name: "J", Value: int64(7), NoIndex: false, Multiple: true},
-			Property{Name: "J", Value: nil, NoIndex: false, Multiple: true},
+			Property{Name: "J", Value: []interface{}{nil, int64(7), nil}, NoIndex: false},
 		},
 		&struct {
 			I int64
@@ -782,12 +843,8 @@
 		},
 		&PropertyList{
 			Property{Name: "A", Value: int64(1), NoIndex: false},
-			Property{Name: "I.W", Value: int64(10), NoIndex: false, Multiple: true},
-			Property{Name: "I.W", Value: int64(20), NoIndex: false, Multiple: true},
-			Property{Name: "I.W", Value: int64(30), NoIndex: false, Multiple: true},
-			Property{Name: "I.X", Value: "ten", NoIndex: false, Multiple: true},
-			Property{Name: "I.X", Value: "twenty", NoIndex: false, Multiple: true},
-			Property{Name: "I.X", Value: "thirty", NoIndex: false, Multiple: true},
+			Property{Name: "I.W", Value: []interface{}{int64(10), int64(20), int64(30)}, NoIndex: false},
+			Property{Name: "I.X", Value: []interface{}{"ten", "twenty", "thirty"}, NoIndex: false},
 			Property{Name: "J.Y", Value: float64(3.14), NoIndex: false},
 			Property{Name: "Z", Value: true, NoIndex: false},
 		},
@@ -798,12 +855,8 @@
 		"save props load outer-equivalent",
 		&PropertyList{
 			Property{Name: "A", Value: int64(1), NoIndex: false},
-			Property{Name: "I.W", Value: int64(10), NoIndex: false, Multiple: true},
-			Property{Name: "I.X", Value: "ten", NoIndex: false, Multiple: true},
-			Property{Name: "I.W", Value: int64(20), NoIndex: false, Multiple: true},
-			Property{Name: "I.X", Value: "twenty", NoIndex: false, Multiple: true},
-			Property{Name: "I.W", Value: int64(30), NoIndex: false, Multiple: true},
-			Property{Name: "I.X", Value: "thirty", NoIndex: false, Multiple: true},
+			Property{Name: "I.W", Value: []interface{}{int64(10), int64(20), int64(30)}, NoIndex: false},
+			Property{Name: "I.X", Value: []interface{}{"ten", "twenty", "thirty"}, NoIndex: false},
 			Property{Name: "J.Y", Value: float64(3.14), NoIndex: false},
 			Property{Name: "Z", Value: true, NoIndex: false},
 		},
@@ -1039,30 +1092,18 @@
 		},
 		&PropertyList{
 			Property{Name: "Blue.I", Value: int64(0), NoIndex: false},
-			Property{Name: "Blue.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blu0", NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blu1", NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blu2", NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blu3", NoIndex: false, Multiple: true},
+			Property{Name: "Blue.Nonymous.I", Value: []interface{}{int64(0), int64(0), int64(0), int64(0)}, NoIndex: false},
+			Property{Name: "Blue.Nonymous.S", Value: []interface{}{"blu0", "blu1", "blu2", "blu3"}, NoIndex: false},
 			Property{Name: "Blue.Other", Value: "", NoIndex: false},
 			Property{Name: "Blue.S", Value: "bleu", NoIndex: false},
 			Property{Name: "green.I", Value: int64(0), NoIndex: false},
-			Property{Name: "green.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.S", Value: "verde0", NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.S", Value: "verde1", NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.S", Value: "verde2", NoIndex: false, Multiple: true},
+			Property{Name: "green.Nonymous.I", Value: []interface{}{int64(0), int64(0), int64(0)}, NoIndex: false},
+			Property{Name: "green.Nonymous.S", Value: []interface{}{"verde0", "verde1", "verde2"}, NoIndex: false},
 			Property{Name: "green.Other", Value: "", NoIndex: false},
 			Property{Name: "green.S", Value: "vert", NoIndex: false},
 			Property{Name: "red.I", Value: int64(0), NoIndex: false},
-			Property{Name: "red.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "red.Nonymous.I", Value: int64(0), NoIndex: false, Multiple: true},
-			Property{Name: "red.Nonymous.S", Value: "rosso0", NoIndex: false, Multiple: true},
-			Property{Name: "red.Nonymous.S", Value: "rosso1", NoIndex: false, Multiple: true},
+			Property{Name: "red.Nonymous.I", Value: []interface{}{int64(0), int64(0)}, NoIndex: false},
+			Property{Name: "red.Nonymous.S", Value: []interface{}{"rosso0", "rosso1"}, NoIndex: false},
 			Property{Name: "red.Other", Value: "", NoIndex: false},
 			Property{Name: "red.S", Value: "rouge", NoIndex: false},
 		},
@@ -1073,15 +1114,9 @@
 		"save props load structs with ragged fields",
 		&PropertyList{
 			Property{Name: "red.S", Value: "rot", NoIndex: false},
-			Property{Name: "green.Nonymous.I", Value: int64(10), NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.I", Value: int64(11), NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.I", Value: int64(12), NoIndex: false, Multiple: true},
-			Property{Name: "green.Nonymous.I", Value: int64(13), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blau0", NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.I", Value: int64(20), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blau1", NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.I", Value: int64(21), NoIndex: false, Multiple: true},
-			Property{Name: "Blue.Nonymous.S", Value: "blau2", NoIndex: false, Multiple: true},
+			Property{Name: "green.Nonymous.I", Value: []interface{}{int64(10), int64(11), int64(12), int64(13)}, NoIndex: false},
+			Property{Name: "Blue.Nonymous.I", Value: []interface{}{int64(20), int64(21)}, NoIndex: false},
+			Property{Name: "Blue.Nonymous.S", Value: []interface{}{"blau0", "blau1", "blau2"}, NoIndex: false},
 		},
 		&N2{
 			N1: N1{
@@ -1195,6 +1230,16 @@
 		"",
 		"",
 	},
+	{
+		"repeated property names",
+		&PropertyList{
+			Property{Name: "A", Value: ""},
+			Property{Name: "A", Value: ""},
+		},
+		nil,
+		"duplicate Property",
+		"",
+	},
 }
 
 // checkErr returns the empty string if either both want and err are zero,
@@ -1245,7 +1290,8 @@
 			equal = reflect.DeepEqual(got, tc.want)
 		}
 		if !equal {
-			t.Errorf("%s: compare:\ngot:  %v\nwant: %v", tc.desc, got, tc.want)
+			t.Errorf("%s: compare:\ngot:  %#v\nwant: %#v", tc.desc, got, tc.want)
+			t.Logf("intermediate proto (%s):\n%s", tc.desc, proto.MarshalTextString(p))
 			continue
 		}
 	}
@@ -1514,8 +1560,84 @@
 	}
 }
 
+func TestNoIndexOnSliceProperties(t *testing.T) {
+	// Check that ExcludeFromIndexes is set on the inner elements,
+	// rather than the top-level ArrayValue value.
+	ctx := context.Background()
+	pl := PropertyList{
+		Property{
+			Name: "repeated",
+			Value: []interface{}{
+				123,
+				false,
+				"short",
+				strings.Repeat("a", 1503),
+			},
+			NoIndex: true,
+		},
+	}
+	key := NewKey(ctx, "dummy", "dummy", 0, nil)
+
+	entity, err := saveEntity(key, &pl)
+	if err != nil {
+		t.Fatalf("saveEntity: %v", err)
+	}
+
+	want := &pb.Value{
+		ValueType: &pb.Value_ArrayValue{&pb.ArrayValue{[]*pb.Value{
+			{ValueType: &pb.Value_IntegerValue{123}, ExcludeFromIndexes: true},
+			{ValueType: &pb.Value_BooleanValue{false}, ExcludeFromIndexes: true},
+			{ValueType: &pb.Value_StringValue{"short"}, ExcludeFromIndexes: true},
+			{ValueType: &pb.Value_StringValue{strings.Repeat("a", 1503)}, ExcludeFromIndexes: true},
+		}}},
+	}
+	if got := entity.Properties["repeated"]; !proto.Equal(got, want) {
+		t.Errorf("Entity proto differs\ngot:  %v\nwant: %v", got, want)
+	}
+}
+
 type byName PropertyList
 
 func (s byName) Len() int           { return len(s) }
 func (s byName) Less(i, j int) bool { return s[i].Name < s[j].Name }
 func (s byName) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
+
+func TestValidGeoPoint(t *testing.T) {
+	testCases := []struct {
+		desc string
+		pt   GeoPoint
+		want bool
+	}{
+		{
+			"valid",
+			GeoPoint{67.21, 13.37},
+			true,
+		},
+		{
+			"low lat",
+			GeoPoint{-90.01, 13.37},
+			false,
+		},
+		{
+			"high lat",
+			GeoPoint{90.01, 13.37},
+			false,
+		},
+		{
+			"high lng",
+			GeoPoint{67.21, 182},
+			false,
+		},
+		{
+			"low lng",
+			GeoPoint{67.21, -181},
+			false,
+		},
+	}
+
+	for _, tc := range testCases {
+		if got := tc.pt.Valid(); got != tc.want {
+			t.Errorf("%s: got %v, want %v", tc.desc, got, tc.want)
+		}
+	}
+}
diff --git a/go/src/google.golang.org/cloud/datastore/doc.go b/go/src/google.golang.org/cloud/datastore/doc.go
new file mode 100644
index 0000000..6b55acf
--- /dev/null
+++ b/go/src/google.golang.org/cloud/datastore/doc.go
@@ -0,0 +1,306 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by the Apache 2.0
+// license that can be found in the LICENSE file.
+
+/*
+Package datastore provides a client for Google Cloud Datastore.
+
+
+Basic Operations
+
+Entities are the unit of storage and are associated with a key. A key
+consists of an optional parent key, a string application ID, a string kind
+(also known as an entity type), and either a StringID or an IntID. A
+StringID is also known as an entity name or key name.
+
+It is valid to create a key with a zero StringID and a zero IntID; this is
+called an incomplete key, and does not refer to any saved entity. Putting an
+entity into the datastore under an incomplete key will cause a unique key
+to be generated for that entity, with a non-zero IntID.
+
+An entity's contents are a mapping from case-sensitive field names to values.
+Valid value types are:
+  - signed integers (int, int8, int16, int32 and int64),
+  - bool,
+  - string,
+  - float32 and float64,
+  - []byte (up to 1 megabyte in length),
+  - any type whose underlying type is one of the above predeclared types,
+  - *Key,
+  - GeoPoint,
+  - time.Time (stored with microsecond precision),
+  - structs whose fields are all valid value types,
+  - slices of any of the above.
+
+Slices of structs are valid, as are structs that contain slices. However, if
+one struct contains another, then at most one of those can be repeated. This
+disqualifies recursively defined struct types: any struct T that (directly or
+indirectly) contains a []T.
+
+The Get and Put functions load and save an entity's contents. An entity's
+contents are typically represented by a struct pointer.
+
+Example code:
+
+	type Entity struct {
+		Value string
+	}
+
+	func main() {
+		ctx := context.Background()
+
+		// Create a datastore client. In a typical application, you would create
+		// a single client which is reused for every datastore operation.
+		client, err := datastore.NewClient(ctx, "my-project")
+		if err != nil {
+			// Handle error.
+		}
+
+		k := datastore.NewKey(ctx, "Entity", "stringID", 0, nil)
+		e := new(Entity)
+		if err := client.Get(ctx, k, e); err != nil {
+			// Handle error.
+		}
+
+		old := e.Value
+		e.Value = "Hello World!"
+
+		if _, err := client.Put(ctx, k, e); err != nil {
+			// Handle error.
+		}
+
+		fmt.Printf("Updated value from %q to %q\n", old, e.Value)
+	}
+
+GetMulti, PutMulti and DeleteMulti are batch versions of the Get, Put and
+Delete functions. They take a []*Key instead of a *Key, and may return a
+datastore.MultiError when encountering partial failure.
+
+
+Properties
+
+An entity's contents can be represented by a variety of types. These are
+typically struct pointers, but can also be any type that implements the
+PropertyLoadSaver interface. If using a struct pointer, you do not have to
+explicitly implement the PropertyLoadSaver interface; the datastore will
+automatically convert via reflection. If a struct pointer does implement that
+interface then those methods will be used in preference to the default
+behavior for struct pointers. Struct pointers are more strongly typed and are
+easier to use; PropertyLoadSavers are more flexible.
+
+The actual types passed do not have to match between Get and Put calls or even
+across different calls to datastore. It is valid to put a *PropertyList and
+get that same entity as a *myStruct, or put a *myStruct0 and get a *myStruct1.
+Conceptually, any entity is saved as a sequence of properties, and is loaded
+into the destination value on a property-by-property basis. When loading into
+a struct pointer, an entity that cannot be completely represented (such as a
+missing field) will result in an ErrFieldMismatch error, but it is up to the
+caller whether this error is fatal, recoverable or ignorable.
+
+By default, for struct pointers, all properties are potentially indexed, and
+the property name is the same as the field name (and hence must start with an
+upper case letter). Fields may have a `datastore:"name,options"` tag. The tag
+name is the property name, which must be one or more valid Go identifiers
+joined by ".", but may start with a lower case letter. An empty tag name means
+to just use the field name. A "-" tag name means that the datastore will
+ignore that field. If options is "noindex" then the field will not be indexed.
+If options is "" then the comma may be omitted. There are no other
+recognized options.
+
+All fields are indexed by default. Strings or byte slices longer than 1500
+bytes cannot be indexed; fields used to store long strings and byte slices must
+be tagged with "noindex" or they will cause Put operations to fail.
+
+Example code:
+
+	// A and B are renamed to a and b.
+	// A, C and J are not indexed.
+	// D's tag is equivalent to having no tag at all (E).
+	// I is ignored entirely by the datastore.
+	// J has tag information for both the datastore and json packages.
+	type TaggedStruct struct {
+		A int `datastore:"a,noindex"`
+		B int `datastore:"b"`
+		C int `datastore:",noindex"`
+		D int `datastore:""`
+		E int
+		I int `datastore:"-"`
+		J int `datastore:",noindex" json:"j"`
+	}
+
+
+Structured Properties
+
+If the struct pointed to contains other structs, then the nested or embedded
+structs are flattened. For example, given these definitions:
+
+	type Inner1 struct {
+		W int32
+		X string
+	}
+
+	type Inner2 struct {
+		Y float64
+	}
+
+	type Inner3 struct {
+		Z bool
+	}
+
+	type Outer struct {
+		A int16
+		I []Inner1
+		J Inner2
+		Inner3
+	}
+
+then an Outer's properties would be equivalent to those of:
+
+	type OuterEquivalent struct {
+		A     int16
+		IDotW []int32  `datastore:"I.W"`
+		IDotX []string `datastore:"I.X"`
+		JDotY float64  `datastore:"J.Y"`
+		Z     bool
+	}
+
+If Outer's embedded Inner3 field was tagged as `datastore:"Foo"` then the
+equivalent field would instead be: FooDotZ bool `datastore:"Foo.Z"`.
+
+If an outer struct is tagged "noindex" then all of its implicit flattened
+fields are effectively "noindex".
+
+
+The PropertyLoadSaver Interface
+
+An entity's contents can also be represented by any type that implements the
+PropertyLoadSaver interface. This type may be a struct pointer, but it does
+not have to be. The datastore package will call Load when getting the entity's
+contents, and Save when putting the entity's contents.
+Possible uses include deriving non-stored fields, verifying fields, or indexing
+a field only if its value is positive.
+
+Example code:
+
+	type CustomPropsExample struct {
+		I, J int
+		// Sum is not stored, but should always be equal to I + J.
+		Sum int `datastore:"-"`
+	}
+
+	func (x *CustomPropsExample) Load(ps []datastore.Property) error {
+		// Load I and J as usual.
+		if err := datastore.LoadStruct(x, ps); err != nil {
+			return err
+		}
+		// Derive the Sum field.
+		x.Sum = x.I + x.J
+		return nil
+	}
+
+	func (x *CustomPropsExample) Save() ([]datastore.Property, error) {
+		// Validate the Sum field.
+		if x.Sum != x.I + x.J {
+			return nil, errors.New("CustomPropsExample has inconsistent sum")
+		}
+		// Save I and J as usual. The code below is equivalent to calling
+		// "return datastore.SaveStruct(x)", but is done manually for
+		// demonstration purposes.
+		return []datastore.Property{
+			{
+				Name:  "I",
+				Value: int64(x.I),
+			},
+			{
+				Name:  "J",
+				Value: int64(x.J),
+			},
+		}
+	}
+
+The *PropertyList type implements PropertyLoadSaver, and can therefore hold an
+arbitrary entity's contents.
+
+
+Queries
+
+Queries retrieve entities based on their properties or key's ancestry. Running
+a query yields an iterator of results: either keys or (key, entity) pairs.
+Queries are re-usable and it is safe to call Query.Run from concurrent
+goroutines. Iterators are not safe for concurrent use.
+
+Queries are immutable, and are either created by calling NewQuery, or derived
+from an existing query by calling a method like Filter or Order that returns a
+new query value. A query is typically constructed by calling NewQuery followed
+by a chain of zero or more such methods. These methods are:
+  - Ancestor and Filter constrain the entities returned by running a query.
+  - Order affects the order in which they are returned.
+  - Project constrains the fields returned.
+  - Distinct de-duplicates projected entities.
+  - KeysOnly makes the iterator return only keys, not (key, entity) pairs.
+  - Start, End, Offset and Limit define which sub-sequence of matching entities
+    to return. Start and End take cursors, Offset and Limit take integers. Start
+    and Offset affect the first result, End and Limit affect the last result.
+    If both Start and Offset are set, then the offset is relative to Start.
+    If both End and Limit are set, then the earliest constraint wins. Limit is
+    relative to Start+Offset, not relative to End. As a special case, a
+    negative limit means unlimited.
+
+Example code:
+
+	type Widget struct {
+		Description string
+		Price       int
+	}
+
+	func printWidgets(ctx context.Context, client *datastore.Client) {
+		q := datastore.NewQuery("Widget").
+			Filter("Price <", 1000).
+			Order("-Price")
+		for t := client.Run(ctx, q); ; {
+			var x Widget
+			key, err := t.Next(&x)
+			if err == datastore.Done {
+				break
+			}
+			if err != nil {
+				// Handle error.
+			}
+			fmt.Printf("Key=%v\nWidget=%#v\n\n", key, x)
+		}
+	}
+
+
+Transactions
+
+Client.RunInTransaction runs a function in a transaction.
+
+Example code:
+
+	type Counter struct {
+		Count int
+	}
+
+	func incCount(ctx context.Context, client *datastore.Client) {
+		var count int
+		key := datastore.NewKey(ctx, "Counter", "singleton", 0, nil)
+		err := client.RunInTransaction(ctx, func(tx *datastore.Transaction) error {
+			var x Counter
+			if err := tx.Get(key, &x); err != nil && err != datastore.ErrNoSuchEntity {
+				return err
+			}
+			x.Count++
+			if _, err := tx.Put(key, &x); err != nil {
+				return err
+			}
+			count = x.Count
+			return nil
+		}, nil)
+		if err != nil {
+			// Handle error.
+		}
+		// The value of count is only valid once the transaction is successful
+		// (RunInTransaction has returned nil).
+		fmt.Printf("Count=%d\n", count)
+	}
+*/
+package datastore // import "google.golang.org/cloud/datastore"
diff --git a/go/src/google.golang.org/cloud/datastore/integration_test.go b/go/src/google.golang.org/cloud/datastore/integration_test.go
index c199c42..6e2b019 100644
--- a/go/src/google.golang.org/cloud/datastore/integration_test.go
+++ b/go/src/google.golang.org/cloud/datastore/integration_test.go
@@ -20,6 +20,7 @@
 	"reflect"
 	"sort"
 	"strings"
+	"sync"
 	"testing"
 	"time"
 
@@ -90,9 +91,7 @@
 	defer client.Close()
 
 	p0 := PropertyList{
-		{Name: "L", Value: int64(12), Multiple: true},
-		{Name: "L", Value: "string", Multiple: true},
-		{Name: "L", Value: true, Multiple: true},
+		{Name: "L", Value: []interface{}{int64(12), "string", true}},
 	}
 	k, err := client.Put(ctx, NewIncompleteKey(ctx, "ListValue", nil), &p0)
 	if err != nil {
@@ -363,6 +362,171 @@
 	})
 }
 
+func TestLargeQuery(t *testing.T) {
+	if testing.Short() {
+		t.Skip("Integration tests skipped in short mode")
+	}
+	ctx := context.Background()
+	client := newClient(ctx, t)
+	defer client.Close()
+
+	parent := NewKey(ctx, "LQParent", "TestFilters"+suffix, 0, nil)
+	now := time.Now().Truncate(time.Millisecond).Unix()
+
+	// Make a large number of children entities.
+	const n = 800
+	children := make([]*SQChild, 0, n)
+	keys := make([]*Key, 0, n)
+	for i := 0; i < n; i++ {
+		children = append(children, &SQChild{I: i, T: now, U: now})
+		keys = append(keys, NewIncompleteKey(ctx, "SQChild", parent))
+	}
+
+	// Store using PutMulti in batches.
+	const batchSize = 500
+	for i := 0; i < n; i += batchSize {
+		j := i + batchSize
+		if j > n {
+			j = n
+		}
+		fullKeys, err := client.PutMulti(ctx, keys[i:j], children[i:j])
+		if err != nil {
+			t.Fatalf("PutMulti(%d, %d): %v", i, j, err)
+		}
+		defer func() {
+			err := client.DeleteMulti(ctx, fullKeys)
+			if err != nil {
+				t.Errorf("client.DeleteMulti: %v", err)
+			}
+		}()
+	}
+
+	q := NewQuery("SQChild").Ancestor(parent).Filter("T=", now).Order("I")
+
+	// Wait group to allow us to run query tests in parallel below.
+	var wg sync.WaitGroup
+
+	// Check we get the expected count and results for various limits/offsets.
+	queryTests := []struct {
+		limit, offset, want int
+	}{
+		// Just limit.
+		{limit: 0, want: 0},
+		{limit: 100, want: 100},
+		{limit: 501, want: 501},
+		{limit: n, want: n},
+		{limit: n * 2, want: n},
+		{limit: -1, want: n},
+		// Just offset.
+		{limit: -1, offset: 100, want: n - 100},
+		{limit: -1, offset: 500, want: n - 500},
+		{limit: -1, offset: n, want: 0},
+		// Limit and offset.
+		{limit: 100, offset: 100, want: 100},
+		{limit: 1000, offset: 100, want: n - 100},
+		{limit: 500, offset: 500, want: n - 500},
+	}
+	for _, tt := range queryTests {
+		q := q.Limit(tt.limit).Offset(tt.offset)
+		wg.Add(1)
+
+		go func(limit, offset, want int) {
+			defer wg.Done()
+			// Check Count returns the expected number of results.
+			count, err := client.Count(ctx, q)
+			if err != nil {
+				t.Errorf("client.Count(limit=%d offset=%d): %v", limit, offset, err)
+				return
+			}
+			if count != want {
+				t.Errorf("Count(limit=%d offset=%d) returned %d, want %d", limit, offset, count, want)
+			}
+
+			var got []SQChild
+			_, err = client.GetAll(ctx, q, &got)
+			if err != nil {
+				t.Errorf("client.GetAll(limit=%d offset=%d): %v", limit, offset, err)
+				return
+			}
+			if len(got) != want {
+				t.Errorf("GetAll(limit=%d offset=%d) returned %d, want %d", limit, offset, len(got), want)
+			}
+			for i, child := range got {
+				if got, want := child.I, i+offset; got != want {
+					t.Errorf("GetAll(limit=%d offset=%d) got[%d].I == %d; want %d", limit, offset, i, got, want)
+					break
+				}
+			}
+		}(tt.limit, tt.offset, tt.want)
+	}
+
+	// Also check iterator cursor behaviour.
+	cursorTests := []struct {
+		limit, offset int // Query limit and offset.
+		count         int // The number of times to call "next"
+		want          int // The I value of the desired element, -1 for "Done".
+	}{
+		// No limits.
+		{count: 0, limit: -1, want: 0},
+		{count: 5, limit: -1, want: 5},
+		{count: 500, limit: -1, want: 500},
+		{count: 1000, limit: -1, want: -1}, // No more results.
+		// Limits.
+		{count: 5, limit: 5, want: 5},
+		{count: 500, limit: 5, want: 5},
+		{count: 1000, limit: 1000, want: -1}, // No more results.
+		// Offsets.
+		{count: 0, offset: 5, limit: -1, want: 5},
+		{count: 5, offset: 5, limit: -1, want: 10},
+		{count: 200, offset: 500, limit: -1, want: 700},
+		{count: 200, offset: 1000, limit: -1, want: -1}, // No more results.
+	}
+	for _, tt := range cursorTests {
+		wg.Add(1)
+
+		go func(count, limit, offset, want int) {
+			defer wg.Done()
+
+			// Run iterator through count calls to Next.
+			it := client.Run(ctx, q.Limit(limit).Offset(offset).KeysOnly())
+			for i := 0; i < count; i++ {
+				_, err := it.Next(nil)
+				if err == Done {
+					break
+				}
+				if err != nil {
+					t.Errorf("count=%d, limit=%d, offset=%d: it.Next failed at i=%d: %v", count, limit, offset, i, err)
+					return
+				}
+			}
+
+			// Grab the cursor.
+			cursor, err := it.Cursor()
+			if err != nil {
+				t.Errorf("count=%d, limit=%d, offset=%d: it.Cursor: %v", count, limit, offset, err)
+				return
+			}
+
+			// Make a request for the next element.
+			it = client.Run(ctx, q.Limit(1).Start(cursor))
+			var entity SQChild
+			_, err = it.Next(&entity)
+			switch {
+			case want == -1:
+				if err != Done {
+					t.Errorf("count=%d, limit=%d, offset=%d: it.Next from cursor: got %v, want Done", count, limit, offset, err)
+				}
+			case err != nil:
+				t.Errorf("count=%d, limit=%d, offset=%d: it.Next from cursor: %v, want nil", count, limit, offset, err)
+			case entity.I != want:
+				t.Errorf("count=%d, limit=%d, offset=%d: got.I = %d, want %d", count, limit, offset, entity.I, want)
+			}
+		}(tt.count, tt.limit, tt.offset, tt.want)
+	}
+
+	wg.Wait()
+}
+
 func TestEventualConsistency(t *testing.T) {
 	if testing.Short() {
 		t.Skip("Integration tests skipped in short mode")
@@ -577,7 +741,7 @@
 		{
 			desc:    "Kindless bad filter",
 			query:   NewQuery("").Filter("I =", 4),
-			wantErr: "kind is required for filter: I",
+			wantErr: "kind is required",
 		},
 		{
 			desc:    "Kindless bad order",
@@ -770,3 +934,34 @@
 		t.Errorf("Delete: %v", err)
 	}
 }
+
+func TestNestedRepeatedElementNoIndex(t *testing.T) {
+	if testing.Short() {
+		t.Skip("Integration tests skipped in short mode")
+	}
+	ctx := context.Background()
+	client := newClient(ctx, t)
+	defer client.Close()
+
+	type Inner struct {
+		Name  string
+		Value string `datastore:",noindex"`
+	}
+	type Outer struct {
+		Config []Inner
+	}
+	m := &Outer{
+		Config: []Inner{
+			{Name: "short", Value: "a"},
+			{Name: "long", Value: strings.Repeat("a", 2000)},
+		},
+	}
+
+	key := NewKey(ctx, "Nested", "Nested"+suffix, 0, nil)
+	if _, err := client.Put(ctx, key, m); err != nil {
+		t.Fatalf("client.Put: %v", err)
+	}
+	if err := client.Delete(ctx, key); err != nil {
+		t.Fatalf("client.Delete: %v", err)
+	}
+}
diff --git a/go/src/google.golang.org/cloud/datastore/internal/proto/datastore.pb.go b/go/src/google.golang.org/cloud/datastore/internal/proto/datastore.pb.go
index 9c12e41..67ae707 100644
--- a/go/src/google.golang.org/cloud/datastore/internal/proto/datastore.pb.go
+++ b/go/src/google.golang.org/cloud/datastore/internal/proto/datastore.pb.go
@@ -126,9 +126,9 @@
 // The request for [google.datastore.v1beta3.Datastore.Lookup][google.datastore.v1beta3.Datastore.Lookup].
 type LookupRequest struct {
 	// The ID of the project against which to make the request.
-	ProjectId string `protobuf:"bytes,8,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,8,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 	// The options for this lookup request.
-	ReadOptions *ReadOptions `protobuf:"bytes,1,opt,name=read_options" json:"read_options,omitempty"`
+	ReadOptions *ReadOptions `protobuf:"bytes,1,opt,name=read_options,json=readOptions" json:"read_options,omitempty"`
 	// Keys of entities to look up.
 	Keys []*Key `protobuf:"bytes,3,rep,name=keys" json:"keys,omitempty"`
 }
@@ -197,14 +197,14 @@
 // The request for [google.datastore.v1beta3.Datastore.RunQuery][google.datastore.v1beta3.Datastore.RunQuery].
 type RunQueryRequest struct {
 	// The ID of the project against which to make the request.
-	ProjectId string `protobuf:"bytes,8,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,8,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 	// Entities are partitioned into subsets, identified by a partition ID.
 	// Queries are scoped to a single partition.
 	// This partition ID is normalized with the standard default context
 	// partition ID.
-	PartitionId *PartitionId `protobuf:"bytes,2,opt,name=partition_id" json:"partition_id,omitempty"`
+	PartitionId *PartitionId `protobuf:"bytes,2,opt,name=partition_id,json=partitionId" json:"partition_id,omitempty"`
 	// The options for this query.
-	ReadOptions *ReadOptions `protobuf:"bytes,1,opt,name=read_options" json:"read_options,omitempty"`
+	ReadOptions *ReadOptions `protobuf:"bytes,1,opt,name=read_options,json=readOptions" json:"read_options,omitempty"`
 	// The type of query.
 	//
 	// Types that are valid to be assigned to QueryType:
@@ -226,7 +226,7 @@
 	Query *Query `protobuf:"bytes,3,opt,name=query,oneof"`
 }
 type RunQueryRequest_GqlQuery struct {
-	GqlQuery *GqlQuery `protobuf:"bytes,7,opt,name=gql_query,oneof"`
+	GqlQuery *GqlQuery `protobuf:"bytes,7,opt,name=gql_query,json=gqlQuery,oneof"`
 }
 
 func (*RunQueryRequest_Query) isRunQueryRequest_QueryType()    {}
@@ -371,7 +371,7 @@
 // The request for [google.datastore.v1beta3.Datastore.BeginTransaction][google.datastore.v1beta3.Datastore.BeginTransaction].
 type BeginTransactionRequest struct {
 	// The ID of the project against which to make the request.
-	ProjectId string `protobuf:"bytes,8,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,8,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 }
 
 func (m *BeginTransactionRequest) Reset()                    { *m = BeginTransactionRequest{} }
@@ -393,7 +393,7 @@
 // The request for [google.datastore.v1beta3.Datastore.Rollback][google.datastore.v1beta3.Datastore.Rollback].
 type RollbackRequest struct {
 	// The ID of the project against which to make the request.
-	ProjectId string `protobuf:"bytes,8,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,8,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 	// The transaction identifier, returned by a call to
 	// [google.datastore.v1beta3.Datastore.BeginTransaction][google.datastore.v1beta3.Datastore.BeginTransaction].
 	Transaction []byte `protobuf:"bytes,1,opt,name=transaction,proto3" json:"transaction,omitempty"`
@@ -417,7 +417,7 @@
 // The request for [google.datastore.v1beta3.Datastore.Commit][google.datastore.v1beta3.Datastore.Commit].
 type CommitRequest struct {
 	// The ID of the project against which to make the request.
-	ProjectId string `protobuf:"bytes,8,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,8,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 	// The type of commit to perform. Defaults to `TRANSACTIONAL`.
 	Mode CommitRequest_Mode `protobuf:"varint,5,opt,name=mode,enum=google.datastore.v1beta3.CommitRequest_Mode" json:"mode,omitempty"`
 	// Must be set when mode is `TRANSACTIONAL`.
@@ -532,10 +532,10 @@
 type CommitResponse struct {
 	// The result of performing the mutations.
 	// The i-th mutation result corresponds to the i-th mutation in the request.
-	MutationResults []*MutationResult `protobuf:"bytes,3,rep,name=mutation_results" json:"mutation_results,omitempty"`
+	MutationResults []*MutationResult `protobuf:"bytes,3,rep,name=mutation_results,json=mutationResults" json:"mutation_results,omitempty"`
 	// The number of index entries updated during the commit, or zero if none were
 	// updated.
-	IndexUpdates int32 `protobuf:"varint,4,opt,name=index_updates" json:"index_updates,omitempty"`
+	IndexUpdates int32 `protobuf:"varint,4,opt,name=index_updates,json=indexUpdates" json:"index_updates,omitempty"`
 }
 
 func (m *CommitResponse) Reset()                    { *m = CommitResponse{} }
@@ -553,7 +553,7 @@
 // The request for [google.datastore.v1beta3.Datastore.AllocateIds][google.datastore.v1beta3.Datastore.AllocateIds].
 type AllocateIdsRequest struct {
 	// The ID of the project against which to make the request.
-	ProjectId string `protobuf:"bytes,8,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,8,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 	// A list of keys with incomplete key paths for which to allocate IDs.
 	// No key may be reserved/read-only.
 	Keys []*Key `protobuf:"bytes,1,rep,name=keys" json:"keys,omitempty"`
@@ -824,7 +824,7 @@
 }
 
 type ReadOptions_ReadConsistency_ struct {
-	ReadConsistency ReadOptions_ReadConsistency `protobuf:"varint,1,opt,name=read_consistency,enum=google.datastore.v1beta3.ReadOptions_ReadConsistency,oneof"`
+	ReadConsistency ReadOptions_ReadConsistency `protobuf:"varint,1,opt,name=read_consistency,json=readConsistency,enum=google.datastore.v1beta3.ReadOptions_ReadConsistency,oneof"`
 }
 type ReadOptions_Transaction struct {
 	Transaction []byte `protobuf:"bytes,2,opt,name=transaction,proto3,oneof"`
@@ -943,6 +943,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for Datastore service
 
 type DatastoreClient interface {
@@ -1047,76 +1051,112 @@
 	s.RegisterService(&_Datastore_serviceDesc, srv)
 }
 
-func _Datastore_Lookup_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Datastore_Lookup_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(LookupRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(DatastoreServer).Lookup(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(DatastoreServer).Lookup(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.datastore.v1beta3.Datastore/Lookup",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(DatastoreServer).Lookup(ctx, req.(*LookupRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _Datastore_RunQuery_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Datastore_RunQuery_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(RunQueryRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(DatastoreServer).RunQuery(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(DatastoreServer).RunQuery(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.datastore.v1beta3.Datastore/RunQuery",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(DatastoreServer).RunQuery(ctx, req.(*RunQueryRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _Datastore_BeginTransaction_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Datastore_BeginTransaction_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(BeginTransactionRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(DatastoreServer).BeginTransaction(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(DatastoreServer).BeginTransaction(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.datastore.v1beta3.Datastore/BeginTransaction",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(DatastoreServer).BeginTransaction(ctx, req.(*BeginTransactionRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _Datastore_Commit_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Datastore_Commit_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(CommitRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(DatastoreServer).Commit(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(DatastoreServer).Commit(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.datastore.v1beta3.Datastore/Commit",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(DatastoreServer).Commit(ctx, req.(*CommitRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _Datastore_Rollback_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Datastore_Rollback_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(RollbackRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(DatastoreServer).Rollback(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(DatastoreServer).Rollback(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.datastore.v1beta3.Datastore/Rollback",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(DatastoreServer).Rollback(ctx, req.(*RollbackRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _Datastore_AllocateIds_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Datastore_AllocateIds_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(AllocateIdsRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(DatastoreServer).AllocateIds(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(DatastoreServer).AllocateIds(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/google.datastore.v1beta3.Datastore/AllocateIds",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(DatastoreServer).AllocateIds(ctx, req.(*AllocateIdsRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 var _Datastore_serviceDesc = grpc.ServiceDesc{
@@ -1152,62 +1192,67 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 907 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xa4, 0x56, 0x6d, 0x6f, 0xdb, 0x54,
-	0x14, 0x6e, 0xd2, 0x24, 0x4b, 0x4e, 0xda, 0x90, 0xdd, 0x6d, 0x60, 0x45, 0x20, 0x2a, 0x4b, 0x40,
-	0x29, 0x2c, 0x61, 0x99, 0x2a, 0xc4, 0x18, 0x12, 0x49, 0xea, 0x8d, 0x88, 0xd5, 0xd9, 0x92, 0x14,
-	0x89, 0x0f, 0xc8, 0x72, 0xed, 0x3b, 0x63, 0xea, 0xf8, 0xba, 0xf6, 0x35, 0x22, 0x42, 0xfc, 0x84,
-	0x7d, 0xe7, 0x6f, 0xf0, 0x77, 0xf8, 0xcc, 0xef, 0x40, 0x9c, 0x5c, 0xdb, 0x4d, 0x9c, 0xd5, 0x4e,
-	0x22, 0xbe, 0xd5, 0x27, 0xe7, 0x39, 0x2f, 0xcf, 0x73, 0xef, 0x73, 0x0b, 0xdf, 0x5a, 0x8c, 0x59,
-	0x0e, 0x6d, 0x5b, 0xcc, 0xd1, 0x5d, 0xab, 0xcd, 0x7c, 0xab, 0x63, 0x38, 0x2c, 0x34, 0x3b, 0xa6,
-	0xce, 0xf5, 0x80, 0x33, 0x9f, 0x76, 0x6c, 0x97, 0x53, 0xdf, 0xd5, 0x9d, 0x8e, 0xe7, 0x33, 0xce,
-	0x96, 0x3f, 0xb4, 0xc5, 0x37, 0x91, 0xe2, 0x0a, 0xcb, 0xf8, 0xaf, 0x8f, 0x2e, 0x29, 0xd7, 0x1f,
-	0xb7, 0xbe, 0xd9, 0xb9, 0x36, 0x75, 0xb9, 0xcd, 0xe7, 0x51, 0xe1, 0xd6, 0xd3, 0x9d, 0xe1, 0xd7,
-	0x21, 0xf5, 0x63, 0xb4, 0xfc, 0xa6, 0x00, 0x87, 0x2f, 0x18, 0xbb, 0x0a, 0xbd, 0x31, 0xc5, 0x78,
-	0xc0, 0x09, 0x01, 0xc0, 0x9f, 0x7e, 0xa1, 0x06, 0xd7, 0x6c, 0x53, 0xaa, 0x1e, 0x15, 0x8e, 0x6b,
-	0xe4, 0x6b, 0x38, 0xf0, 0xa9, 0x6e, 0x6a, 0xcc, 0xe3, 0x36, 0x73, 0x03, 0xa9, 0x80, 0xd1, 0x7a,
-	0xf7, 0xa3, 0x76, 0xd6, 0x4e, 0xed, 0x31, 0x66, 0x8f, 0xa2, 0x64, 0xf2, 0x19, 0x94, 0xae, 0xe8,
-	0x3c, 0x90, 0xf6, 0x8f, 0xf6, 0x11, 0xf4, 0x41, 0x36, 0xe8, 0x7b, 0x3a, 0x97, 0xff, 0x2a, 0x40,
-	0x23, 0x99, 0x27, 0xf0, 0x10, 0x4e, 0xc9, 0x29, 0x94, 0x5f, 0xb3, 0xd0, 0x35, 0xb1, 0xeb, 0xa2,
-	0xc0, 0xc7, 0xd9, 0x05, 0x14, 0xc1, 0x0b, 0x02, 0x43, 0x87, 0x93, 0x2f, 0xe1, 0xce, 0xcc, 0x0e,
-	0x02, 0xdb, 0xb5, 0xa4, 0xe2, 0x4e, 0xc0, 0x0e, 0x54, 0x4d, 0xfa, 0x9a, 0xfa, 0x3e, 0x35, 0xb7,
-	0x9b, 0xf9, 0xcf, 0x22, 0xbc, 0x33, 0x0e, 0xdd, 0x57, 0x0b, 0x5a, 0x37, 0xb0, 0xe8, 0xe9, 0x3e,
-	0x76, 0x42, 0x5a, 0x16, 0xd1, 0xe2, 0x26, 0x16, 0x5f, 0x26, 0xd9, 0x43, 0xf3, 0xff, 0x49, 0xf0,
-	0x05, 0x94, 0x85, 0xe8, 0xb8, 0xcf, 0x02, 0xf5, 0x61, 0x36, 0x4a, 0x2c, 0xf1, 0xdd, 0x1e, 0xb2,
-	0x57, 0xb3, 0xae, 0x1d, 0x2d, 0x42, 0xdd, 0x11, 0x28, 0x39, 0x1b, 0xf5, 0xfc, 0xda, 0x89, 0x81,
-	0xfd, 0x03, 0x00, 0x01, 0xd2, 0xf8, 0xdc, 0xa3, 0xf2, 0x1f, 0xd0, 0x5c, 0x32, 0x13, 0xeb, 0xf9,
-	0x15, 0x94, 0x2f, 0x75, 0x6e, 0xfc, 0x1c, 0xaf, 0x70, 0xb2, 0x61, 0x98, 0x48, 0x95, 0xfe, 0x02,
-	0x41, 0xda, 0xc9, 0x1e, 0xc5, 0xad, 0xf6, 0x90, 0x1f, 0xc2, 0x7b, 0x7d, 0x6a, 0xd9, 0xee, 0xd4,
-	0xd7, 0xdd, 0x40, 0x37, 0x16, 0x64, 0xe4, 0x08, 0x24, 0x77, 0x40, 0x7a, 0x3b, 0x3d, 0x9e, 0xfa,
-	0x1e, 0xd4, 0xf9, 0x32, 0x2c, 0x66, 0x3f, 0x90, 0x9f, 0xa0, 0xf0, 0xcc, 0x71, 0x2e, 0x75, 0xe3,
-	0x2a, 0x4f, 0xf8, 0x5b, 0xb1, 0x04, 0xa9, 0xb9, 0xc1, 0x46, 0x4d, 0xe4, 0x37, 0x45, 0x38, 0x1c,
-	0xb0, 0xd9, 0xcc, 0xe6, 0x79, 0xe5, 0x9e, 0x40, 0x69, 0xc6, 0x4c, 0x2a, 0x95, 0xf1, 0xab, 0xd1,
-	0xfd, 0x3c, 0x9b, 0x84, 0x54, 0xa9, 0xf6, 0x39, 0x62, 0xc8, 0x83, 0x5b, 0x46, 0x41, 0xb9, 0x4f,
-	0xa1, 0x36, 0x0b, 0xb9, 0x1e, 0x1d, 0xad, 0x8a, 0x38, 0xf4, 0x39, 0x72, 0x9f, 0xc7, 0xa9, 0xf2,
-	0x33, 0x28, 0x89, 0xaa, 0xf7, 0xa1, 0x79, 0x3e, 0x3a, 0x53, 0xb4, 0x0b, 0x75, 0xf2, 0x52, 0x19,
-	0x0c, 0x9f, 0x0d, 0x95, 0xb3, 0xe6, 0x1e, 0xb9, 0x0b, 0x87, 0xd3, 0x71, 0x4f, 0x9d, 0xf4, 0x06,
-	0xd3, 0xe1, 0x48, 0xed, 0xbd, 0x68, 0x16, 0xb0, 0xfd, 0x5d, 0x75, 0xa4, 0x6a, 0xe9, 0x70, 0xb1,
-	0xff, 0x2e, 0xdc, 0x5f, 0x99, 0x4a, 0x0b, 0xa8, 0x83, 0x0b, 0x33, 0x5f, 0xbe, 0x82, 0x46, 0xb2,
-	0x43, 0x2c, 0x43, 0x1f, 0x9a, 0xc9, 0xa0, 0x9a, 0x2f, 0x4e, 0x46, 0x62, 0x2c, 0xc7, 0x9b, 0xe7,
-	0x8d, 0x2f, 0xf8, 0x03, 0x38, 0xb4, 0x5d, 0x93, 0xfe, 0xa6, 0x85, 0x1e, 0xe6, 0xd2, 0x40, 0x2a,
-	0x21, 0x0b, 0x65, 0xf9, 0x02, 0x48, 0xcf, 0x71, 0x98, 0x81, 0xa1, 0xa1, 0x19, 0xe4, 0x09, 0x90,
-	0x38, 0x5a, 0x61, 0x1b, 0x77, 0xe8, 0xc3, 0xbd, 0x54, 0xd9, 0x78, 0x91, 0x9d, 0x6a, 0xfc, 0x53,
-	0x80, 0x6a, 0xb2, 0x04, 0xe9, 0x42, 0xc5, 0xc6, 0x0a, 0x3e, 0x17, 0x73, 0xd7, 0xbb, 0x47, 0x9b,
-	0x7c, 0x0d, 0xf5, 0x45, 0x4c, 0xb4, 0xac, 0x38, 0x34, 0x3b, 0x60, 0x44, 0x9f, 0xca, 0xd6, 0x98,
-	0x0e, 0x54, 0x4c, 0x14, 0x0f, 0xfb, 0x44, 0x9e, 0x91, 0xbf, 0x17, 0xda, 0x45, 0x1d, 0x6a, 0xcc,
-	0xa3, 0x7e, 0x74, 0x9c, 0x9e, 0x42, 0x63, 0x4d, 0xaa, 0x13, 0xd8, 0x47, 0x96, 0x62, 0xdb, 0xda,
-	0x40, 0xd2, 0xdf, 0x05, 0xa8, 0xaf, 0x9a, 0xde, 0x2b, 0x68, 0x0a, 0xc7, 0x34, 0xf0, 0xc3, 0x0e,
-	0x38, 0x75, 0x8d, 0xb9, 0x38, 0xef, 0x8d, 0xee, 0xe9, 0x56, 0xae, 0x29, 0xfe, 0x1e, 0x2c, 0xc1,
-	0xb8, 0xde, 0xda, 0xed, 0x29, 0x46, 0xb7, 0x47, 0x3e, 0x47, 0x1b, 0x48, 0xe7, 0x92, 0x23, 0x78,
-	0x7f, 0xac, 0xf4, 0xce, 0xb4, 0xc1, 0x48, 0x9d, 0x0c, 0x27, 0x53, 0x45, 0x1d, 0xfc, 0xb8, 0x76,
-	0x3b, 0x00, 0x2a, 0x93, 0xe9, 0x78, 0xa4, 0x3e, 0xc7, 0x6b, 0x71, 0x00, 0x55, 0xe5, 0x07, 0x45,
-	0x9d, 0x5e, 0x88, 0xdb, 0x80, 0xce, 0xb0, 0x32, 0xb3, 0x30, 0xd2, 0xee, 0xbf, 0x25, 0xa8, 0x9d,
-	0x25, 0xd3, 0x92, 0x9f, 0xa0, 0x12, 0x3d, 0x92, 0xe4, 0x93, 0xec, 0x55, 0x52, 0xcf, 0x7a, 0xeb,
-	0x78, 0x73, 0x62, 0x6c, 0x42, 0x7b, 0xc4, 0x80, 0x6a, 0xe2, 0xda, 0xe4, 0xd3, 0x1c, 0xae, 0xd2,
-	0x6f, 0x5e, 0xeb, 0x64, 0x9b, 0xd4, 0x9b, 0x26, 0xbf, 0x43, 0x73, 0xdd, 0x6c, 0xc9, 0xa3, 0xec,
-	0x0a, 0x19, 0x3e, 0xde, 0xea, 0xee, 0x02, 0xb9, 0x69, 0x8e, 0x04, 0x46, 0xc6, 0x92, 0x47, 0x60,
-	0xca, 0x3e, 0xf3, 0x08, 0x4c, 0x7b, 0x54, 0x4c, 0x60, 0xec, 0xed, 0xb9, 0x04, 0xa6, 0xdf, 0x8e,
-	0x5c, 0x02, 0xd7, 0x9f, 0x8a, 0x3d, 0xe2, 0x40, 0x7d, 0xc5, 0x58, 0x48, 0xce, 0x3b, 0xf0, 0xb6,
-	0xad, 0xb5, 0x1e, 0x6e, 0x99, 0x9d, 0x74, 0xbb, 0xac, 0x88, 0xff, 0x17, 0x1f, 0xff, 0x17, 0x00,
-	0x00, 0xff, 0xff, 0x51, 0xbf, 0xf8, 0x2e, 0x0a, 0x0b, 0x00, 0x00,
+	// 992 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x56, 0x5f, 0x8f, 0xe2, 0x54,
+	0x14, 0xa7, 0x0c, 0xb0, 0x70, 0x60, 0xa0, 0x7b, 0x5d, 0xb5, 0x21, 0x6e, 0x24, 0x35, 0x2a, 0x6e,
+	0x14, 0x32, 0x6c, 0x36, 0xab, 0x66, 0x4c, 0xf8, 0x33, 0xec, 0x42, 0xdc, 0x81, 0xf5, 0xc2, 0x98,
+	0xf8, 0xb0, 0x69, 0x4a, 0x7b, 0x07, 0xeb, 0x94, 0xde, 0x4e, 0x7b, 0x31, 0x12, 0x5f, 0x7d, 0xd2,
+	0x8f, 0xe2, 0xbb, 0x5f, 0xc2, 0xc4, 0xcf, 0xe3, 0x93, 0x31, 0xbd, 0x6d, 0xa7, 0x94, 0xdd, 0x42,
+	0x31, 0xbe, 0xd1, 0xc3, 0xf9, 0x9d, 0x73, 0x7e, 0xbf, 0x7b, 0xee, 0x39, 0x17, 0xba, 0x4b, 0x4a,
+	0x97, 0x26, 0x69, 0x2d, 0xa9, 0xa9, 0x5a, 0xcb, 0x16, 0x75, 0x96, 0x6d, 0xcd, 0xa4, 0x6b, 0xbd,
+	0xad, 0xab, 0x4c, 0x75, 0x19, 0x75, 0x48, 0xdb, 0xb0, 0x18, 0x71, 0x2c, 0xd5, 0x6c, 0xdb, 0x0e,
+	0x65, 0x34, 0xfa, 0xa3, 0xc5, 0xbf, 0x91, 0x14, 0x44, 0x88, 0xec, 0x3f, 0x9e, 0x2d, 0x08, 0x53,
+	0x1f, 0xd7, 0xbf, 0x3a, 0x3a, 0x36, 0xb1, 0x98, 0xc1, 0x36, 0x7e, 0xe0, 0xfa, 0xf9, 0xd1, 0xf0,
+	0xdb, 0x35, 0x71, 0x02, 0xb4, 0xfc, 0xbb, 0x00, 0xa7, 0x2f, 0x28, 0xbd, 0x59, 0xdb, 0x98, 0xdc,
+	0xae, 0x89, 0xcb, 0xd0, 0x43, 0x00, 0xdb, 0xa1, 0x3f, 0x10, 0x8d, 0x29, 0x86, 0x2e, 0x15, 0x1b,
+	0x42, 0xb3, 0x84, 0x4b, 0x81, 0x65, 0xac, 0xa3, 0x11, 0x54, 0x1c, 0xa2, 0xea, 0x0a, 0xb5, 0x99,
+	0x41, 0x2d, 0x57, 0x12, 0x1a, 0x42, 0xb3, 0xdc, 0xf9, 0xb0, 0x95, 0x44, 0xaf, 0x85, 0x89, 0xaa,
+	0x4f, 0x7d, 0x67, 0x5c, 0x76, 0xa2, 0x0f, 0x74, 0x06, 0xb9, 0x1b, 0xb2, 0x71, 0xa5, 0x93, 0xc6,
+	0x49, 0xb3, 0xdc, 0x79, 0x98, 0x1c, 0xe1, 0x6b, 0xb2, 0xc1, 0xdc, 0x55, 0xfe, 0x53, 0x80, 0x6a,
+	0x58, 0xad, 0x6b, 0x53, 0xcb, 0x25, 0xe8, 0x1c, 0xf2, 0xd7, 0x74, 0x6d, 0xe9, 0x92, 0xc0, 0xc3,
+	0x7c, 0x94, 0x1c, 0x66, 0xc8, 0x55, 0xc3, 0xc4, 0x5d, 0x9b, 0x0c, 0xfb, 0x20, 0xd4, 0x85, 0x7b,
+	0x2b, 0xc3, 0x75, 0x0d, 0x6b, 0x29, 0x65, 0x8f, 0xc2, 0x87, 0x30, 0xf4, 0x05, 0x14, 0x75, 0x72,
+	0x4d, 0x1c, 0x87, 0xe8, 0xe9, 0x98, 0xdc, 0xb9, 0xcb, 0x7f, 0x65, 0xa1, 0x86, 0xd7, 0xd6, 0x37,
+	0xde, 0x71, 0xa4, 0x57, 0xdf, 0x56, 0x1d, 0x66, 0x78, 0x0a, 0x7a, 0x0e, 0xd9, 0x43, 0xea, 0xbf,
+	0x0c, 0xbd, 0xc7, 0x3a, 0x2e, 0xdb, 0xd1, 0xc7, 0xff, 0x78, 0x8e, 0x4f, 0x21, 0xcf, 0x3b, 0x4a,
+	0x3a, 0xe1, 0x21, 0xde, 0x4f, 0x0e, 0xc1, 0x99, 0x8e, 0x32, 0xd8, 0xf7, 0x47, 0x3d, 0x28, 0x2d,
+	0x6f, 0x4d, 0xc5, 0x07, 0xdf, 0xe3, 0x60, 0x39, 0x19, 0xfc, 0xfc, 0xd6, 0x0c, 0xf1, 0xc5, 0x65,
+	0xf0, 0xbb, 0x5f, 0x01, 0xe0, 0x70, 0x85, 0x6d, 0x6c, 0x22, 0xff, 0x26, 0x80, 0x18, 0x09, 0x1a,
+	0x34, 0x48, 0x17, 0xf2, 0x0b, 0x95, 0x69, 0xdf, 0x07, 0x0c, 0x1f, 0x1d, 0x28, 0xcf, 0x3f, 0xdf,
+	0xbe, 0x87, 0xc0, 0x3e, 0x10, 0x3d, 0x09, 0x09, 0x66, 0x53, 0x11, 0x0c, 0xe8, 0xc9, 0x9f, 0xc3,
+	0xbb, 0x7d, 0xb2, 0x34, 0xac, 0xb9, 0xa3, 0x5a, 0xae, 0xaa, 0x79, 0x62, 0xa5, 0x3b, 0x65, 0xf9,
+	0x1c, 0xa4, 0xd7, 0x91, 0x01, 0x9d, 0x06, 0x94, 0x59, 0x64, 0xe6, 0xa4, 0x2a, 0x78, 0xdb, 0x24,
+	0x63, 0xa8, 0x61, 0x6a, 0x9a, 0x0b, 0x55, 0xbb, 0x49, 0xd9, 0x55, 0x87, 0x63, 0x22, 0x10, 0xa3,
+	0x98, 0x7e, 0x25, 0xf2, 0x1f, 0x59, 0x38, 0x1d, 0xd0, 0xd5, 0xca, 0x60, 0x29, 0xd3, 0x74, 0x21,
+	0xb7, 0xa2, 0x3a, 0x91, 0xf2, 0x0d, 0xa1, 0x59, 0xed, 0x7c, 0x9a, 0x2c, 0x63, 0x2c, 0x6a, 0xeb,
+	0x92, 0xea, 0x04, 0x73, 0x24, 0x92, 0xdf, 0x50, 0xe8, 0x28, 0x13, 0x2b, 0x15, 0x75, 0xa1, 0xb4,
+	0x5a, 0x33, 0xd5, 0xef, 0xea, 0x02, 0xbf, 0x91, 0x7b, 0xba, 0xea, 0x32, 0x70, 0xc5, 0x11, 0x48,
+	0x7e, 0x06, 0x39, 0x2f, 0x27, 0x7a, 0x00, 0xe2, 0xe5, 0xf4, 0x62, 0xa8, 0x5c, 0x4d, 0x66, 0x2f,
+	0x87, 0x83, 0xf1, 0xb3, 0xf1, 0xf0, 0x42, 0xcc, 0xa0, 0xfb, 0x70, 0x3a, 0xc7, 0xbd, 0xc9, 0xac,
+	0x37, 0x98, 0x8f, 0xa7, 0x93, 0xde, 0x0b, 0x51, 0x40, 0x6f, 0xc3, 0xfd, 0xc9, 0x74, 0xa2, 0xc4,
+	0xcd, 0xd9, 0xfe, 0x3b, 0xf0, 0x60, 0xab, 0x30, 0xc5, 0x25, 0x26, 0xd1, 0x18, 0x75, 0xe4, 0x5f,
+	0x05, 0xa8, 0x86, 0x14, 0x83, 0x53, 0x9d, 0x81, 0x18, 0xe6, 0x57, 0x1c, 0xde, 0x81, 0xe1, 0x5c,
+	0x6c, 0xa6, 0xa8, 0xdd, 0x1f, 0x49, 0xb5, 0x55, 0xec, 0xdb, 0x45, 0x1f, 0xc0, 0xa9, 0x61, 0xe9,
+	0xe4, 0x27, 0x65, 0x6d, 0xeb, 0x2a, 0x23, 0xae, 0x94, 0x6b, 0x08, 0xcd, 0x3c, 0xae, 0x70, 0xe3,
+	0x95, 0x6f, 0x93, 0xaf, 0x01, 0xf5, 0x4c, 0x93, 0x6a, 0x2a, 0x23, 0x63, 0xdd, 0x4d, 0x79, 0x92,
+	0xe1, 0xe8, 0x16, 0xd2, 0x8f, 0xee, 0x11, 0xbc, 0x15, 0xcb, 0x13, 0x10, 0xff, 0x0f, 0x91, 0x7e,
+	0xc9, 0x42, 0x31, 0xa4, 0x8e, 0xbe, 0x84, 0x82, 0x61, 0xb9, 0xc4, 0x61, 0x9c, 0x5c, 0xb9, 0xd3,
+	0x38, 0x34, 0xbf, 0x47, 0x19, 0x1c, 0x20, 0x3c, 0xac, 0xaf, 0x0c, 0xef, 0xc8, 0x94, 0x58, 0x1f,
+	0xe1, 0x63, 0x79, 0xde, 0xc2, 0x31, 0x58, 0x9e, 0xf7, 0x29, 0x14, 0x74, 0x62, 0x12, 0x46, 0x82,
+	0xa1, 0xb7, 0x9f, 0xb5, 0x07, 0xf4, 0xdd, 0xfb, 0x65, 0x28, 0x51, 0x9b, 0x38, 0x9c, 0xb9, 0xdc,
+	0x83, 0x6a, 0xbc, 0x01, 0x50, 0x1b, 0x4e, 0x6e, 0x48, 0x38, 0x86, 0x0f, 0x48, 0xe9, 0x79, 0xca,
+	0x7f, 0x0b, 0x50, 0xde, 0x1a, 0xeb, 0x68, 0x01, 0x22, 0xdf, 0x09, 0x1a, 0xb5, 0x5c, 0xc3, 0x65,
+	0xc4, 0xd2, 0x36, 0xfc, 0x8e, 0x55, 0x3b, 0x4f, 0x52, 0xed, 0x05, 0xfe, 0x7b, 0x10, 0x81, 0x47,
+	0x19, 0x5c, 0x73, 0xe2, 0xa6, 0xdd, 0x2b, 0x9c, 0x7d, 0xc3, 0x15, 0x96, 0x2f, 0xa1, 0xb6, 0x13,
+	0x09, 0x35, 0xe0, 0x3d, 0x3c, 0xec, 0x5d, 0x28, 0x83, 0xe9, 0x64, 0x36, 0x9e, 0xcd, 0x87, 0x93,
+	0xc1, 0x77, 0x3b, 0xf7, 0x12, 0xa0, 0x30, 0x9b, 0xe3, 0xe9, 0xe4, 0xb9, 0x28, 0xa0, 0x0a, 0x14,
+	0x87, 0xdf, 0x0e, 0x27, 0xf3, 0x2b, 0x7e, 0x0f, 0x11, 0x88, 0x5b, 0x8c, 0xf8, 0xaa, 0xe8, 0xfc,
+	0x93, 0x83, 0xd2, 0x45, 0xc8, 0x05, 0xbd, 0x82, 0x82, 0xff, 0xac, 0x40, 0x1f, 0x27, 0x13, 0x8d,
+	0x3d, 0x93, 0xea, 0xcd, 0xc3, 0x8e, 0xc1, 0x9c, 0xcc, 0x20, 0x0d, 0x8a, 0xe1, 0x5a, 0x42, 0x9f,
+	0xec, 0x51, 0x32, 0xfe, 0x16, 0xa8, 0x3f, 0x4a, 0xe3, 0x7a, 0x97, 0xe4, 0x67, 0x10, 0x77, 0x97,
+	0x06, 0x3a, 0x4b, 0x8e, 0x90, 0xb0, 0x9a, 0xea, 0x9d, 0x63, 0x20, 0x77, 0xc9, 0x5f, 0x41, 0xc1,
+	0x9f, 0x68, 0xfb, 0x04, 0x8c, 0x8d, 0xf5, 0x7d, 0x02, 0xc6, 0x87, 0x63, 0x20, 0x60, 0xb0, 0x7e,
+	0xf6, 0x0a, 0x18, 0x5f, 0x7b, 0x7b, 0x05, 0xdc, 0xdd, 0x66, 0x19, 0x64, 0x42, 0x79, 0x6b, 0x42,
+	0xa1, 0x3d, 0xfb, 0xe9, 0xf5, 0x81, 0x59, 0xff, 0x2c, 0xa5, 0x77, 0x98, 0x6d, 0x51, 0xe0, 0xef,
+	0xef, 0xc7, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0xb4, 0x06, 0x32, 0x1d, 0x5a, 0x0c, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/datastore/internal/proto/entity.pb.go b/go/src/google.golang.org/cloud/datastore/internal/proto/entity.pb.go
index 42a5802..6acf1da 100644
--- a/go/src/google.golang.org/cloud/datastore/internal/proto/entity.pb.go
+++ b/go/src/google.golang.org/cloud/datastore/internal/proto/entity.pb.go
@@ -37,9 +37,9 @@
 // Reads and writes of foreign partition IDs may fail if the project is not in an active state.
 type PartitionId struct {
 	// The ID of the project to which the entities belong.
-	ProjectId string `protobuf:"bytes,2,opt,name=project_id" json:"project_id,omitempty"`
+	ProjectId string `protobuf:"bytes,2,opt,name=project_id,json=projectId" json:"project_id,omitempty"`
 	// If not empty, the ID of the namespace to which the entities belong.
-	NamespaceId string `protobuf:"bytes,4,opt,name=namespace_id" json:"namespace_id,omitempty"`
+	NamespaceId string `protobuf:"bytes,4,opt,name=namespace_id,json=namespaceId" json:"namespace_id,omitempty"`
 }
 
 func (m *PartitionId) Reset()                    { *m = PartitionId{} }
@@ -55,7 +55,7 @@
 	// Entities are partitioned into subsets, currently identified by a project
 	// ID and namespace ID.
 	// Queries are scoped to a single partition.
-	PartitionId *PartitionId `protobuf:"bytes,1,opt,name=partition_id" json:"partition_id,omitempty"`
+	PartitionId *PartitionId `protobuf:"bytes,1,opt,name=partition_id,json=partitionId" json:"partition_id,omitempty"`
 	// The entity path.
 	// An entity path consists of one or more elements composed of a kind and a
 	// string or numerical identifier, which identify entities. The first
@@ -259,7 +259,7 @@
 	Meaning int32 `protobuf:"varint,14,opt,name=meaning" json:"meaning,omitempty"`
 	// If the value should be excluded from all indexes including those defined
 	// explicitly.
-	ExcludeFromIndexes bool `protobuf:"varint,19,opt,name=exclude_from_indexes" json:"exclude_from_indexes,omitempty"`
+	ExcludeFromIndexes bool `protobuf:"varint,19,opt,name=exclude_from_indexes,json=excludeFromIndexes" json:"exclude_from_indexes,omitempty"`
 }
 
 func (m *Value) Reset()                    { *m = Value{} }
@@ -272,37 +272,37 @@
 }
 
 type Value_NullValue struct {
-	NullValue google_protobuf.NullValue `protobuf:"varint,11,opt,name=null_value,enum=google.protobuf.NullValue,oneof"`
+	NullValue google_protobuf.NullValue `protobuf:"varint,11,opt,name=null_value,json=nullValue,enum=google.protobuf.NullValue,oneof"`
 }
 type Value_BooleanValue struct {
-	BooleanValue bool `protobuf:"varint,1,opt,name=boolean_value,oneof"`
+	BooleanValue bool `protobuf:"varint,1,opt,name=boolean_value,json=booleanValue,oneof"`
 }
 type Value_IntegerValue struct {
-	IntegerValue int64 `protobuf:"varint,2,opt,name=integer_value,oneof"`
+	IntegerValue int64 `protobuf:"varint,2,opt,name=integer_value,json=integerValue,oneof"`
 }
 type Value_DoubleValue struct {
-	DoubleValue float64 `protobuf:"fixed64,3,opt,name=double_value,oneof"`
+	DoubleValue float64 `protobuf:"fixed64,3,opt,name=double_value,json=doubleValue,oneof"`
 }
 type Value_TimestampValue struct {
-	TimestampValue *google_protobuf1.Timestamp `protobuf:"bytes,10,opt,name=timestamp_value,oneof"`
+	TimestampValue *google_protobuf1.Timestamp `protobuf:"bytes,10,opt,name=timestamp_value,json=timestampValue,oneof"`
 }
 type Value_KeyValue struct {
-	KeyValue *Key `protobuf:"bytes,5,opt,name=key_value,oneof"`
+	KeyValue *Key `protobuf:"bytes,5,opt,name=key_value,json=keyValue,oneof"`
 }
 type Value_StringValue struct {
-	StringValue string `protobuf:"bytes,17,opt,name=string_value,oneof"`
+	StringValue string `protobuf:"bytes,17,opt,name=string_value,json=stringValue,oneof"`
 }
 type Value_BlobValue struct {
-	BlobValue []byte `protobuf:"bytes,18,opt,name=blob_value,proto3,oneof"`
+	BlobValue []byte `protobuf:"bytes,18,opt,name=blob_value,json=blobValue,proto3,oneof"`
 }
 type Value_GeoPointValue struct {
-	GeoPointValue *google_type.LatLng `protobuf:"bytes,8,opt,name=geo_point_value,oneof"`
+	GeoPointValue *google_type.LatLng `protobuf:"bytes,8,opt,name=geo_point_value,json=geoPointValue,oneof"`
 }
 type Value_EntityValue struct {
-	EntityValue *Entity `protobuf:"bytes,6,opt,name=entity_value,oneof"`
+	EntityValue *Entity `protobuf:"bytes,6,opt,name=entity_value,json=entityValue,oneof"`
 }
 type Value_ArrayValue struct {
-	ArrayValue *ArrayValue `protobuf:"bytes,9,opt,name=array_value,oneof"`
+	ArrayValue *ArrayValue `protobuf:"bytes,9,opt,name=array_value,json=arrayValue,oneof"`
 }
 
 func (*Value_NullValue) isValue_ValueType()      {}
@@ -673,43 +673,50 @@
 }
 
 var fileDescriptor1 = []byte{
-	// 602 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x93, 0xcf, 0x6e, 0xd3, 0x40,
-	0x10, 0xc6, 0xeb, 0x3a, 0x49, 0xeb, 0x89, 0x49, 0xc4, 0xb6, 0x82, 0x28, 0x2a, 0x6a, 0x15, 0x81,
-	0x04, 0x1c, 0xec, 0x92, 0x0a, 0xb5, 0xa2, 0xea, 0x81, 0xa8, 0x95, 0x8a, 0xa8, 0x50, 0x85, 0x10,
-	0x57, 0x6b, 0x1d, 0x6f, 0x5d, 0xd3, 0x8d, 0xd7, 0x72, 0xd6, 0x55, 0x7d, 0xe3, 0xd5, 0x78, 0x01,
-	0x9e, 0x89, 0xd9, 0x3f, 0x4e, 0x50, 0x51, 0x5a, 0x71, 0xf3, 0xce, 0xcc, 0xcf, 0xf3, 0xed, 0xcc,
-	0xb7, 0x70, 0x92, 0x0a, 0x91, 0x72, 0x16, 0xa4, 0x82, 0xd3, 0x3c, 0x0d, 0x44, 0x99, 0x86, 0x53,
-	0x2e, 0xaa, 0x24, 0x4c, 0xa8, 0xa4, 0x73, 0x29, 0x4a, 0x16, 0x66, 0xb9, 0x64, 0x65, 0x4e, 0x79,
-	0x58, 0x94, 0x42, 0x8a, 0x90, 0xe5, 0x32, 0x93, 0x75, 0xa0, 0x0f, 0x64, 0x60, 0xf1, 0x45, 0x75,
-	0x70, 0xfb, 0x2e, 0x66, 0x92, 0x1e, 0x0c, 0x77, 0x4c, 0xc6, 0x40, 0x71, 0x75, 0x15, 0xce, 0x65,
-	0x59, 0x4d, 0xa5, 0xe1, 0x86, 0xbb, 0xf7, 0xb3, 0x32, 0x9b, 0xb1, 0xb9, 0xa4, 0xb3, 0xc2, 0x16,
-	0x4c, 0xfe, 0x43, 0x97, 0xac, 0x0b, 0x16, 0x19, 0x71, 0x9c, 0x4a, 0x8e, 0xe5, 0xfa, 0x30, 0x3a,
-	0x84, 0xee, 0x25, 0x2d, 0x51, 0x6d, 0x26, 0xf2, 0x4f, 0x09, 0x21, 0x00, 0x18, 0xff, 0xc1, 0xa6,
-	0x32, 0xca, 0x92, 0xc1, 0xfa, 0x9e, 0xf3, 0xda, 0x23, 0xdb, 0xe0, 0xe7, 0x14, 0x3b, 0x17, 0x74,
-	0xca, 0x54, 0xb4, 0xa5, 0xa2, 0xa3, 0x5f, 0x0e, 0xb8, 0x9f, 0x59, 0x4d, 0x8e, 0xc1, 0x2f, 0x9a,
-	0x1f, 0xa8, 0xac, 0x83, 0xd9, 0xee, 0xf8, 0x55, 0xb0, 0xea, 0xd2, 0xc1, 0xdf, 0xed, 0x0e, 0xa1,
-	0x55, 0x50, 0x79, 0x8d, 0x8d, 0x5c, 0x84, 0xde, 0xac, 0x86, 0xb0, 0x13, 0x82, 0xf2, 0xfa, 0x8c,
-	0xb3, 0x19, 0x0e, 0x77, 0x78, 0xaa, 0x64, 0x2f, 0x8e, 0xc4, 0x87, 0xd6, 0x4d, 0x96, 0x9b, 0xe6,
-	0x1e, 0x9e, 0xd6, 0xad, 0x78, 0xf7, 0x7c, 0x8d, 0xf4, 0xa0, 0xa5, 0xe4, 0x0f, 0x5c, 0x95, 0x3b,
-	0x5f, 0x9b, 0x78, 0xb0, 0x91, 0x25, 0x91, 0x9a, 0xc7, 0xe8, 0x04, 0xe0, 0x63, 0x59, 0xd2, 0xfa,
-	0x3b, 0xe5, 0x15, 0x23, 0x21, 0x74, 0x6e, 0xd5, 0xc7, 0x1c, 0x7f, 0xa3, 0xe4, 0xec, 0xae, 0x96,
-	0xa3, 0x81, 0xd1, 0xcf, 0x16, 0xb4, 0x0d, 0xba, 0x0f, 0x90, 0x57, 0x9c, 0x47, 0x9a, 0x1f, 0x74,
-	0xb1, 0x53, 0x6f, 0x3c, 0x6c, 0xf0, 0x66, 0x7f, 0xc1, 0x17, 0x2c, 0xd1, 0xf5, 0xa8, 0xea, 0x39,
-	0x3c, 0x89, 0x85, 0xe0, 0x8c, 0xe6, 0x16, 0x52, 0xd2, 0x37, 0x4d, 0x42, 0x2d, 0x2d, 0x65, 0xa5,
-	0x4d, 0x34, 0xf7, 0x78, 0x06, 0x7e, 0x22, 0xaa, 0x98, 0x33, 0x1b, 0x57, 0xf7, 0x71, 0x30, 0xfe,
-	0x1e, 0xfa, 0x0b, 0x63, 0xd8, 0x14, 0xe8, 0x1d, 0xfc, 0x2b, 0xe0, 0x5b, 0x53, 0x87, 0xd8, 0x18,
-	0xbc, 0x1b, 0x56, 0x5b, 0xa0, 0xad, 0x81, 0x17, 0x0f, 0xce, 0xdf, 0x48, 0x40, 0x87, 0x66, 0x79,
-	0x6a, 0xb1, 0xa7, 0x66, 0xa4, 0xe8, 0x10, 0x88, 0xb9, 0x88, 0x6d, 0x94, 0x60, 0xd4, 0xc7, 0x68,
-	0x00, 0xfd, 0x94, 0x89, 0xa8, 0x10, 0x78, 0x1f, 0x9b, 0xda, 0xd4, 0x7d, 0xb6, 0x9a, 0x3e, 0x6a,
-	0x09, 0xc1, 0x05, 0x95, 0x17, 0x79, 0x8a, 0xf5, 0x47, 0xe0, 0x9b, 0x77, 0x63, 0x8b, 0x3b, 0xba,
-	0x78, 0x6f, 0xb5, 0xa8, 0x33, 0x5d, 0x8d, 0xe4, 0x31, 0x74, 0xa9, 0xda, 0xa3, 0x05, 0x3d, 0x0d,
-	0xbe, 0x5c, 0x0d, 0x2e, 0x97, 0x8e, 0x70, 0x1f, 0x36, 0x66, 0xb8, 0x06, 0xbc, 0xd5, 0xa0, 0x87,
-	0x60, 0x9b, 0xec, 0xc0, 0x36, 0xbb, 0x9b, 0xf2, 0x2a, 0x61, 0xd1, 0x55, 0x29, 0x66, 0x11, 0x1a,
-	0x8b, 0xdd, 0xa1, 0x2b, 0xb6, 0xd4, 0x86, 0x26, 0x3e, 0x80, 0xee, 0x62, 0x1c, 0xf4, 0xdb, 0x81,
-	0x8e, 0x91, 0x41, 0xde, 0x82, 0x8b, 0x03, 0xb5, 0xfe, 0x7f, 0x78, 0x94, 0xe4, 0x54, 0x3f, 0xb3,
-	0x82, 0xe1, 0x4b, 0xc0, 0x1f, 0xbb, 0xda, 0x6e, 0xfb, 0x8f, 0x5d, 0x34, 0xb8, 0x5c, 0x20, 0x18,
-	0x28, 0xeb, 0xe1, 0x57, 0xe8, 0xdf, 0x0b, 0x91, 0xee, 0x52, 0x84, 0x87, 0x0b, 0x68, 0x2f, 0x2d,
-	0xf4, 0xb8, 0x9f, 0x3f, 0xac, 0x1f, 0x39, 0x71, 0x47, 0x9b, 0xe5, 0xe0, 0x4f, 0x00, 0x00, 0x00,
-	0xff, 0xff, 0xc9, 0x4f, 0x01, 0x68, 0xf4, 0x04, 0x00, 0x00,
+	// 716 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x94, 0x94, 0xdd, 0x6e, 0xeb, 0x44,
+	0x10, 0xc7, 0xed, 0x7c, 0x9d, 0x7a, 0xec, 0xd3, 0x1e, 0xb6, 0xbd, 0xb0, 0x22, 0xaa, 0x86, 0x40,
+	0xa5, 0x70, 0xe3, 0x94, 0x56, 0x08, 0x04, 0xf4, 0x82, 0x4a, 0x01, 0x47, 0xad, 0x20, 0x5a, 0x55,
+	0x5c, 0x12, 0x6d, 0xe2, 0xad, 0x6b, 0xb2, 0xd9, 0xb5, 0xec, 0x75, 0x55, 0x3f, 0x13, 0xaf, 0xc5,
+	0x1d, 0x2f, 0x81, 0xf6, 0xc3, 0x4e, 0x55, 0x94, 0xc2, 0xb9, 0xf3, 0xce, 0xfc, 0xe6, 0x9f, 0xff,
+	0xcc, 0xce, 0x06, 0xae, 0x53, 0x21, 0x52, 0x46, 0xa3, 0x54, 0x30, 0xc2, 0xd3, 0x48, 0x14, 0xe9,
+	0x74, 0xcd, 0x44, 0x95, 0x4c, 0x13, 0x22, 0x49, 0x29, 0x45, 0x41, 0xa7, 0x19, 0x97, 0xb4, 0xe0,
+	0x84, 0x4d, 0xf3, 0x42, 0x48, 0x31, 0xa5, 0x5c, 0x66, 0xb2, 0x8e, 0xf4, 0x01, 0x85, 0xb6, 0xbc,
+	0xa5, 0xa3, 0xa7, 0xaf, 0x56, 0x54, 0x92, 0xab, 0xe1, 0xa7, 0x26, 0x63, 0x8a, 0x56, 0xd5, 0xc3,
+	0xb4, 0x94, 0x45, 0xb5, 0x96, 0xa6, 0x6e, 0x78, 0xf6, 0x3a, 0x2b, 0xb3, 0x2d, 0x2d, 0x25, 0xd9,
+	0xe6, 0x16, 0xb8, 0xf9, 0x08, 0x5f, 0xb2, 0xce, 0xe9, 0xd2, 0x98, 0x63, 0x44, 0x32, 0x9e, 0x1a,
+	0x8d, 0xf1, 0xaf, 0xe0, 0x2f, 0x48, 0x21, 0x33, 0x99, 0x09, 0x3e, 0x4f, 0xd0, 0x29, 0x40, 0x5e,
+	0x88, 0x3f, 0xe8, 0x5a, 0x2e, 0xb3, 0x24, 0xec, 0x8c, 0xdc, 0x89, 0x87, 0x3d, 0x1b, 0x99, 0x27,
+	0xe8, 0x33, 0x08, 0x38, 0xd9, 0xd2, 0x32, 0x27, 0x6b, 0xaa, 0x80, 0x9e, 0x06, 0xfc, 0x36, 0x36,
+	0x4f, 0xc6, 0x7f, 0xb9, 0xd0, 0xbd, 0xa5, 0x35, 0x8a, 0x21, 0xc8, 0x1b, 0x61, 0x85, 0xba, 0x23,
+	0x77, 0xe2, 0x5f, 0x9e, 0x47, 0xfb, 0x86, 0x11, 0xbd, 0xb0, 0x81, 0xfd, 0xfc, 0x85, 0xa7, 0x6b,
+	0xe8, 0xe5, 0x44, 0x3e, 0x86, 0x9d, 0x51, 0x77, 0xe2, 0x5f, 0x7e, 0xb9, 0x5f, 0xe1, 0x96, 0xd6,
+	0xd1, 0x82, 0xc8, 0xc7, 0x19, 0xa3, 0x5b, 0xca, 0x25, 0xd6, 0x65, 0xc3, 0x7b, 0xd5, 0x61, 0x1b,
+	0x44, 0x08, 0x7a, 0x9b, 0x8c, 0x1b, 0x3f, 0x1e, 0xd6, 0xdf, 0xe8, 0x03, 0x74, 0x6c, 0xb7, 0xdd,
+	0xd8, 0xc1, 0x9d, 0x2c, 0x41, 0x27, 0xd0, 0x53, 0x4d, 0x85, 0x5d, 0x45, 0xc5, 0x0e, 0xd6, 0xa7,
+	0x1b, 0x0f, 0xde, 0x65, 0xc9, 0x52, 0x8d, 0x72, 0x3c, 0x03, 0xf8, 0xb1, 0x28, 0x48, 0xfd, 0x1b,
+	0x61, 0x15, 0x45, 0xdf, 0xc0, 0xe0, 0x49, 0x7d, 0x94, 0xa1, 0xab, 0x4d, 0x9e, 0xed, 0x37, 0xa9,
+	0x0b, 0xb0, 0xc5, 0xc7, 0x7f, 0xf6, 0xa1, 0x6f, 0x24, 0xbe, 0x07, 0xe0, 0x15, 0x63, 0x4b, 0x9d,
+	0x08, 0xfd, 0x91, 0x3b, 0x39, 0xbc, 0x1c, 0x36, 0x32, 0xcd, 0x0a, 0x44, 0xbf, 0x54, 0x8c, 0x69,
+	0x3e, 0x76, 0xb0, 0xc7, 0x9b, 0x03, 0x3a, 0x87, 0xf7, 0x2b, 0x21, 0x18, 0x25, 0xdc, 0xd6, 0xab,
+	0xee, 0x0e, 0x62, 0x07, 0x07, 0x36, 0xdc, 0x62, 0x6a, 0x21, 0x52, 0x5a, 0x58, 0xac, 0x69, 0x39,
+	0xb0, 0x61, 0x83, 0x7d, 0x0e, 0x41, 0x22, 0xaa, 0x15, 0xa3, 0x96, 0x52, 0x43, 0x70, 0x63, 0x07,
+	0xfb, 0x26, 0x6a, 0xa0, 0x19, 0x1c, 0xb5, 0xfb, 0x68, 0x39, 0xd0, 0x57, 0xfc, 0x6f, 0xd3, 0xf7,
+	0x0d, 0x17, 0x3b, 0xf8, 0xb0, 0x2d, 0x32, 0x32, 0x3f, 0x80, 0xb7, 0xa1, 0xb5, 0x15, 0xe8, 0x6b,
+	0x81, 0xd3, 0x37, 0x6f, 0x38, 0x76, 0xf0, 0xc1, 0x86, 0xd6, 0xad, 0xd3, 0x52, 0x16, 0x19, 0x4f,
+	0xad, 0xc0, 0x27, 0xf6, 0xba, 0x7c, 0x13, 0x35, 0xd0, 0x19, 0xc0, 0x8a, 0x89, 0x95, 0x45, 0xd0,
+	0xc8, 0x9d, 0x04, 0x6a, 0x7a, 0x2a, 0x66, 0x80, 0x6b, 0x38, 0x4a, 0xa9, 0x58, 0xe6, 0x22, 0xe3,
+	0xd2, 0x52, 0x07, 0xda, 0xc9, 0x71, 0xe3, 0x44, 0x5d, 0x79, 0x74, 0x47, 0xe4, 0x1d, 0x4f, 0x63,
+	0x07, 0xbf, 0x4f, 0xa9, 0x58, 0x28, 0xb8, 0x99, 0x44, 0x60, 0xde, 0xbb, 0xad, 0x1d, 0xe8, 0xda,
+	0xd1, 0xfe, 0x2e, 0x66, 0x9a, 0x56, 0x36, 0x4d, 0x9d, 0x91, 0xf9, 0x19, 0x7c, 0xa2, 0x36, 0xca,
+	0xaa, 0x78, 0x5a, 0xe5, 0x8b, 0xfd, 0x2a, 0xbb, 0xf5, 0x8b, 0x1d, 0x0c, 0x64, 0xb7, 0x8c, 0x21,
+	0xbc, 0xdb, 0x52, 0xc2, 0x33, 0x9e, 0x86, 0x87, 0x23, 0x77, 0xd2, 0xc7, 0xcd, 0x11, 0x5d, 0xc0,
+	0x09, 0x7d, 0x5e, 0xb3, 0x2a, 0xa1, 0xcb, 0x87, 0x42, 0x6c, 0x97, 0x19, 0x4f, 0xe8, 0x33, 0x2d,
+	0xc3, 0x63, 0xb5, 0x2d, 0x18, 0xd9, 0xdc, 0x4f, 0x85, 0xd8, 0xce, 0x4d, 0xe6, 0x26, 0x00, 0xd0,
+	0x76, 0xcc, 0xd2, 0xff, 0xed, 0xc2, 0xc0, 0x98, 0x47, 0x53, 0xe8, 0x6e, 0x68, 0x6d, 0x5f, 0xf5,
+	0xdb, 0x37, 0x86, 0x15, 0x89, 0x16, 0xfa, 0x9f, 0x25, 0xa7, 0x85, 0xcc, 0x68, 0x19, 0x76, 0xf5,
+	0x33, 0xb9, 0xf8, 0xaf, 0x19, 0x45, 0x8b, 0xb6, 0x64, 0xc6, 0x65, 0x51, 0xe3, 0x17, 0x1a, 0xc3,
+	0xdf, 0xe1, 0xe8, 0x55, 0x1a, 0x7d, 0xd8, 0xb9, 0xf2, 0xcc, 0xcf, 0x7e, 0x0d, 0xfd, 0xdd, 0xaa,
+	0xff, 0x8f, 0x87, 0x69, 0xe8, 0xef, 0x3a, 0xdf, 0xba, 0xab, 0x81, 0x5e, 0xe0, 0xab, 0x7f, 0x02,
+	0x00, 0x00, 0xff, 0xff, 0xbd, 0x19, 0xdc, 0x16, 0xff, 0x05, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/datastore/internal/proto/query.pb.go b/go/src/google.golang.org/cloud/datastore/internal/proto/query.pb.go
index d34b243..a58d457 100644
--- a/go/src/google.golang.org/cloud/datastore/internal/proto/query.pb.go
+++ b/go/src/google.golang.org/cloud/datastore/internal/proto/query.pb.go
@@ -220,13 +220,13 @@
 	// The properties to make distinct. The query results will contain the first
 	// result for each distinct combination of values for the given properties
 	// (if empty, all results are returned).
-	DistinctOn []*PropertyReference `protobuf:"bytes,6,rep,name=distinct_on" json:"distinct_on,omitempty"`
+	DistinctOn []*PropertyReference `protobuf:"bytes,6,rep,name=distinct_on,json=distinctOn" json:"distinct_on,omitempty"`
 	// A starting point for the query results. Query cursors are
 	// returned in query result batches.
-	StartCursor []byte `protobuf:"bytes,7,opt,name=start_cursor,proto3" json:"start_cursor,omitempty"`
+	StartCursor []byte `protobuf:"bytes,7,opt,name=start_cursor,json=startCursor,proto3" json:"start_cursor,omitempty"`
 	// An ending point for the query results. Query cursors are
 	// returned in query result batches.
-	EndCursor []byte `protobuf:"bytes,8,opt,name=end_cursor,proto3" json:"end_cursor,omitempty"`
+	EndCursor []byte `protobuf:"bytes,8,opt,name=end_cursor,json=endCursor,proto3" json:"end_cursor,omitempty"`
 	// The number of results to skip. Applies before limit, but after all other
 	// constraints. Optional. Must be >= 0 if specified.
 	Offset int32 `protobuf:"varint,10,opt,name=offset" json:"offset,omitempty"`
@@ -365,10 +365,10 @@
 }
 
 type Filter_CompositeFilter struct {
-	CompositeFilter *CompositeFilter `protobuf:"bytes,1,opt,name=composite_filter,oneof"`
+	CompositeFilter *CompositeFilter `protobuf:"bytes,1,opt,name=composite_filter,json=compositeFilter,oneof"`
 }
 type Filter_PropertyFilter struct {
-	PropertyFilter *PropertyFilter `protobuf:"bytes,2,opt,name=property_filter,oneof"`
+	PropertyFilter *PropertyFilter `protobuf:"bytes,2,opt,name=property_filter,json=propertyFilter,oneof"`
 }
 
 func (*Filter_CompositeFilter) isFilter_FilterType() {}
@@ -523,24 +523,24 @@
 type GqlQuery struct {
 	// A string of the format described
 	// [here](https://cloud.google.com/datastore/docs/apis/gql/gql_reference).
-	QueryString string `protobuf:"bytes,1,opt,name=query_string" json:"query_string,omitempty"`
+	QueryString string `protobuf:"bytes,1,opt,name=query_string,json=queryString" json:"query_string,omitempty"`
 	// When false, the query string must not contain any literals and instead
 	// must bind all values. For example,
 	// `SELECT * FROM Kind WHERE a = 'string literal'` is not allowed, while
 	// `SELECT * FROM Kind WHERE a = @value` is.
-	AllowLiterals bool `protobuf:"varint,2,opt,name=allow_literals" json:"allow_literals,omitempty"`
+	AllowLiterals bool `protobuf:"varint,2,opt,name=allow_literals,json=allowLiterals" json:"allow_literals,omitempty"`
 	// For each non-reserved named binding site in the query string,
 	// there must be a named parameter with that name,
 	// but not necessarily the inverse.
 	// Key must match regex `[A-Za-z_$][A-Za-z_$0-9]*`, must not match regex
 	// `__.*__`, and must not be `""`.
-	NamedBindings map[string]*GqlQueryParameter `protobuf:"bytes,5,rep,name=named_bindings" json:"named_bindings,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
+	NamedBindings map[string]*GqlQueryParameter `protobuf:"bytes,5,rep,name=named_bindings,json=namedBindings" json:"named_bindings,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
 	// Numbered binding site @1 references the first numbered parameter,
 	// effectively using 1-based indexing, rather than the usual 0.
 	// For each binding site numbered i in `query_string`,
 	// there must be an i-th numbered parameter.
 	// The inverse must also be true.
-	PositionalBindings []*GqlQueryParameter `protobuf:"bytes,4,rep,name=positional_bindings" json:"positional_bindings,omitempty"`
+	PositionalBindings []*GqlQueryParameter `protobuf:"bytes,4,rep,name=positional_bindings,json=positionalBindings" json:"positional_bindings,omitempty"`
 }
 
 func (m *GqlQuery) Reset()                    { *m = GqlQuery{} }
@@ -685,18 +685,18 @@
 // A batch of results produced by a query.
 type QueryResultBatch struct {
 	// The number of results skipped, typically because of an offset.
-	SkippedResults int32 `protobuf:"varint,6,opt,name=skipped_results" json:"skipped_results,omitempty"`
+	SkippedResults int32 `protobuf:"varint,6,opt,name=skipped_results,json=skippedResults" json:"skipped_results,omitempty"`
 	// A cursor that points to the position after the last skipped result.
 	// Will be set when `skipped_results` != 0.
-	SkippedCursor []byte `protobuf:"bytes,3,opt,name=skipped_cursor,proto3" json:"skipped_cursor,omitempty"`
+	SkippedCursor []byte `protobuf:"bytes,3,opt,name=skipped_cursor,json=skippedCursor,proto3" json:"skipped_cursor,omitempty"`
 	// The result type for every entity in `entity_results`.
-	EntityResultType EntityResult_ResultType `protobuf:"varint,1,opt,name=entity_result_type,enum=google.datastore.v1beta3.EntityResult_ResultType" json:"entity_result_type,omitempty"`
+	EntityResultType EntityResult_ResultType `protobuf:"varint,1,opt,name=entity_result_type,json=entityResultType,enum=google.datastore.v1beta3.EntityResult_ResultType" json:"entity_result_type,omitempty"`
 	// The results for this batch.
-	EntityResults []*EntityResult `protobuf:"bytes,2,rep,name=entity_results" json:"entity_results,omitempty"`
+	EntityResults []*EntityResult `protobuf:"bytes,2,rep,name=entity_results,json=entityResults" json:"entity_results,omitempty"`
 	// A cursor that points to the position after the last result in the batch.
-	EndCursor []byte `protobuf:"bytes,4,opt,name=end_cursor,proto3" json:"end_cursor,omitempty"`
+	EndCursor []byte `protobuf:"bytes,4,opt,name=end_cursor,json=endCursor,proto3" json:"end_cursor,omitempty"`
 	// The state of the query after the current batch.
-	MoreResults QueryResultBatch_MoreResultsType `protobuf:"varint,5,opt,name=more_results,enum=google.datastore.v1beta3.QueryResultBatch_MoreResultsType" json:"more_results,omitempty"`
+	MoreResults QueryResultBatch_MoreResultsType `protobuf:"varint,5,opt,name=more_results,json=moreResults,enum=google.datastore.v1beta3.QueryResultBatch_MoreResultsType" json:"more_results,omitempty"`
 }
 
 func (m *QueryResultBatch) Reset()                    { *m = QueryResultBatch{} }
@@ -732,73 +732,81 @@
 }
 
 var fileDescriptor2 = []byte{
-	// 1085 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xa4, 0x56, 0x5f, 0x53, 0xe3, 0x54,
-	0x14, 0xdf, 0xf4, 0x0f, 0x94, 0xd3, 0x52, 0xb2, 0x77, 0x57, 0xb7, 0xcb, 0xba, 0x2b, 0x9b, 0x71,
-	0x14, 0x75, 0x4c, 0xa5, 0x8c, 0xce, 0x0e, 0x23, 0x3b, 0x96, 0x36, 0x40, 0xa5, 0x24, 0x25, 0x29,
-	0xce, 0xf0, 0x94, 0x09, 0xed, 0xa5, 0x46, 0x42, 0xd2, 0x4d, 0x6e, 0x77, 0xe5, 0x33, 0xf8, 0xec,
-	0x8c, 0xef, 0x7e, 0x04, 0xfd, 0x26, 0xce, 0xf8, 0xe2, 0x97, 0xf1, 0xe4, 0xde, 0xa4, 0xd0, 0x42,
-	0x2d, 0xcc, 0xbe, 0x25, 0xf7, 0x9e, 0xdf, 0xef, 0xfc, 0x3f, 0xe7, 0xc2, 0x77, 0x83, 0x20, 0x18,
-	0x78, 0x54, 0x1d, 0x04, 0x9e, 0xe3, 0x0f, 0xd4, 0x20, 0x1c, 0x54, 0x7b, 0x5e, 0x30, 0xea, 0x57,
-	0xfb, 0x0e, 0x73, 0x22, 0x16, 0x84, 0xb4, 0xea, 0xfa, 0x8c, 0x86, 0xbe, 0xe3, 0x55, 0x87, 0x61,
-	0xc0, 0x82, 0xea, 0x9b, 0x11, 0x0d, 0x2f, 0x55, 0xfe, 0x4d, 0x2a, 0x09, 0x7a, 0x2c, 0xac, 0xbe,
-	0xdd, 0x38, 0xa5, 0xcc, 0xd9, 0x5c, 0xdd, 0xbe, 0x37, 0x2f, 0xf5, 0x99, 0xcb, 0x12, 0xe2, 0xd5,
-	0x17, 0x02, 0x2e, 0xae, 0x4e, 0x47, 0x67, 0xd5, 0x77, 0xa1, 0x33, 0x1c, 0xd2, 0x30, 0x4a, 0xee,
-	0x77, 0xee, 0x41, 0xcf, 0x2e, 0x87, 0xd4, 0x16, 0x3a, 0x3c, 0x87, 0x79, 0x28, 0xce, 0x7f, 0x94,
-	0x3f, 0x24, 0x28, 0x69, 0x5c, 0xa9, 0x49, 0xa3, 0x91, 0xc7, 0xc8, 0xd7, 0xb0, 0x20, 0x8c, 0xa8,
-	0x48, 0x6b, 0xd2, 0x7a, 0xb1, 0xb6, 0xa6, 0xce, 0x72, 0x4f, 0x15, 0x38, 0x52, 0x86, 0x85, 0xde,
-	0x28, 0x8c, 0x82, 0xb0, 0x92, 0x45, 0x44, 0x49, 0x39, 0x02, 0x10, 0x5c, 0x5d, 0xd4, 0x49, 0x9e,
-	0xc1, 0x13, 0x53, 0xb3, 0x8e, 0xdb, 0x5d, 0xbb, 0x7b, 0xd2, 0xd1, 0xec, 0x63, 0xdd, 0xea, 0x68,
-	0x8d, 0xd6, 0x6e, 0x4b, 0x6b, 0xca, 0x0f, 0x48, 0x01, 0x72, 0xbb, 0xc7, 0xed, 0xb6, 0x2c, 0x21,
-	0x09, 0x74, 0x4c, 0xe3, 0x07, 0xad, 0xd1, 0x6d, 0x19, 0xba, 0x9c, 0x21, 0x25, 0x28, 0x1c, 0x68,
-	0x27, 0xb6, 0xa1, 0xb7, 0x4f, 0xe4, 0xac, 0xf2, 0x6b, 0x16, 0xf2, 0x47, 0x71, 0xc8, 0xc9, 0x2b,
-	0x00, 0x34, 0xfc, 0x67, 0xda, 0x63, 0x6e, 0xe0, 0x57, 0x32, 0x6b, 0x59, 0x34, 0xf1, 0x93, 0xd9,
-	0x26, 0x76, 0xc6, 0xb2, 0xe4, 0x5b, 0xc8, 0x9d, 0xbb, 0x7e, 0x1f, 0x8d, 0x8c, 0x31, 0xeb, 0xb3,
-	0x31, 0x07, 0x28, 0xa5, 0xfd, 0x32, 0x0c, 0x69, 0x14, 0xc5, 0x38, 0x0c, 0xc8, 0x99, 0xeb, 0x61,
-	0x18, 0x2b, 0xb9, 0x79, 0x01, 0xd9, 0xe5, 0x72, 0xa8, 0x29, 0x1f, 0x84, 0x7d, 0x04, 0xe4, 0xb9,
-	0xaa, 0xcf, 0xfe, 0xd7, 0x3c, 0xcc, 0x27, 0xbb, 0x34, 0x62, 0x71, 0xf2, 0x3d, 0x14, 0xfb, 0x6e,
-	0xc4, 0x5c, 0xbf, 0xc7, 0x6c, 0x74, 0x6e, 0x81, 0xa3, 0xbf, 0x9c, 0x8f, 0x36, 0xe9, 0x19, 0x0d,
-	0xa9, 0xdf, 0xa3, 0xe4, 0x31, 0x94, 0x22, 0xe6, 0x84, 0xcc, 0x4e, 0x12, 0xb2, 0x18, 0x27, 0x84,
-	0x10, 0x00, 0xea, 0xf7, 0xd3, 0xb3, 0x02, 0x3f, 0xc3, 0xa4, 0x05, 0x67, 0x67, 0x11, 0x65, 0x15,
-	0xc0, 0xff, 0x3c, 0xf9, 0x02, 0xf2, 0x9e, 0x7b, 0xe1, 0xb2, 0x4a, 0x89, 0x3b, 0xf9, 0x2c, 0xd5,
-	0x9a, 0xd6, 0x9e, 0xda, 0xf2, 0xd9, 0x66, 0xed, 0x47, 0xc7, 0x1b, 0x51, 0xe5, 0x05, 0x94, 0xa7,
-	0x62, 0x54, 0x82, 0x9c, 0xef, 0x5c, 0x50, 0x5e, 0x32, 0x4b, 0xca, 0x4b, 0x78, 0x78, 0xd3, 0xb4,
-	0x54, 0x24, 0xc3, 0x45, 0x0e, 0x30, 0xdd, 0x57, 0xa9, 0xd9, 0x86, 0xc2, 0x30, 0x01, 0x24, 0x55,
-	0x77, 0x1f, 0xaf, 0x95, 0x7f, 0x25, 0x58, 0x9e, 0x8c, 0xe4, 0xfb, 0x11, 0x92, 0x26, 0x2c, 0xf5,
-	0xdd, 0x70, 0x5c, 0x63, 0xd2, 0x7a, 0xb9, 0xb6, 0x71, 0xc7, 0x24, 0xaa, 0xcd, 0x14, 0xa8, 0x68,
-	0xb0, 0x34, 0xfe, 0x21, 0x4f, 0xe1, 0x83, 0x66, 0xcb, 0x14, 0xe5, 0x3d, 0xd5, 0x04, 0xcb, 0xb0,
-	0x54, 0xb7, 0x1a, 0x9a, 0xde, 0x6c, 0xe9, 0x7b, 0xa2, 0x13, 0x9a, 0xda, 0xf8, 0x3f, 0x13, 0x77,
-	0xe8, 0x42, 0x52, 0x58, 0x1a, 0xc8, 0xbd, 0xe0, 0x62, 0x18, 0x44, 0x2e, 0xa3, 0x76, 0x52, 0x94,
-	0xc2, 0xbd, 0xcf, 0x67, 0x9b, 0xd7, 0x48, 0x11, 0x82, 0x64, 0xff, 0x01, 0x69, 0xc0, 0x4a, 0x1a,
-	0x9d, 0x94, 0x25, 0xc3, 0x59, 0xd6, 0xe7, 0x3b, 0x99, 0x92, 0xec, 0x2c, 0x43, 0x51, 0x60, 0xed,
-	0x78, 0xb4, 0x28, 0x7f, 0x4a, 0xb0, 0x32, 0xa5, 0x89, 0xbc, 0x86, 0x4c, 0x30, 0xe4, 0x06, 0x96,
-	0x6b, 0xb5, 0x3b, 0x1b, 0xa8, 0x1a, 0xa8, 0xc8, 0x41, 0x09, 0xb2, 0x01, 0x8b, 0x42, 0x45, 0x94,
-	0x34, 0xfa, 0xdc, 0xd6, 0x53, 0xbe, 0x82, 0xc2, 0x18, 0x5e, 0x81, 0xc7, 0x46, 0x47, 0x33, 0xeb,
-	0x5d, 0xc3, 0x9c, 0x8a, 0xf8, 0x22, 0x64, 0xeb, 0x7a, 0x53, 0x96, 0x94, 0x7f, 0x32, 0x50, 0x9e,
-	0xf4, 0xec, 0x7d, 0x4b, 0x67, 0x9b, 0xfb, 0x7c, 0xe7, 0x9a, 0x99, 0x76, 0x59, 0x85, 0xfc, 0xdb,
-	0xb8, 0xc7, 0xf8, 0x28, 0x2d, 0xd6, 0x3e, 0x9e, 0xcd, 0x20, 0x5a, 0xf1, 0x37, 0xe9, 0x4e, 0x0e,
-	0x63, 0x89, 0xb5, 0x35, 0xcb, 0xb2, 0xbb, 0xfb, 0x75, 0x1d, 0x4b, 0xec, 0x43, 0x20, 0xe3, 0x5f,
-	0x1b, 0x85, 0xb5, 0xa3, 0xe3, 0x7a, 0x1b, 0x87, 0xae, 0x0c, 0xa5, 0x3d, 0x53, 0xab, 0x77, 0x35,
-	0x53, 0x48, 0x66, 0xe3, 0xb2, 0xbd, 0x7e, 0x72, 0x25, 0x9c, 0x23, 0x4b, 0x90, 0x17, 0x9f, 0xf9,
-	0x18, 0xb7, 0x5f, 0xb7, 0xec, 0xba, 0xde, 0xd0, 0x2c, 0x54, 0x2e, 0x17, 0x95, 0xbf, 0x32, 0x50,
-	0xd8, 0x7b, 0xe3, 0x89, 0x99, 0x8d, 0x53, 0x89, 0xef, 0x4b, 0x3b, 0x62, 0xa1, 0xeb, 0x0f, 0xc4,
-	0x94, 0x40, 0x23, 0xca, 0x8e, 0xe7, 0x05, 0xef, 0x6c, 0x0f, 0x53, 0x1f, 0x3a, 0x5e, 0xc4, 0xa3,
-	0x56, 0x20, 0x87, 0x50, 0x8e, 0x07, 0x45, 0xdf, 0x3e, 0xc5, 0x19, 0x83, 0xe2, 0x51, 0x32, 0x46,
-	0xbf, 0x99, 0x1d, 0x8b, 0x54, 0x93, 0xaa, 0xc7, 0xc0, 0x9d, 0x04, 0x87, 0xeb, 0x09, 0x95, 0xef,
-	0xc3, 0x23, 0x5e, 0x5d, 0xd8, 0x84, 0x8e, 0x77, 0xc5, 0x99, 0x9b, 0x37, 0x5c, 0x53, 0xce, 0x8e,
-	0x13, 0x22, 0x29, 0x5a, 0xb8, 0xea, 0x00, 0xb9, 0x85, 0xbf, 0x08, 0xd9, 0x73, 0x7a, 0x99, 0xf8,
-	0xb4, 0x95, 0xa6, 0x2f, 0x33, 0xaf, 0x72, 0x6e, 0xd0, 0x6f, 0x65, 0x5e, 0x49, 0x8a, 0x0b, 0x0f,
-	0x6f, 0x5c, 0xe0, 0x02, 0x9a, 0x20, 0x9d, 0x57, 0x13, 0xd8, 0xe0, 0xf2, 0xe4, 0x46, 0xc6, 0x6e,
-	0x95, 0xa1, 0x3c, 0x4c, 0x09, 0x45, 0xc3, 0xfe, 0x9d, 0x05, 0x99, 0x2b, 0x12, 0xbb, 0x7a, 0xc7,
-	0x61, 0xbd, 0x9f, 0xc8, 0x13, 0x58, 0x89, 0xce, 0x5d, 0x7c, 0x63, 0xf4, 0xed, 0x90, 0x1f, 0x47,
-	0xb8, 0x85, 0xe2, 0xf5, 0x80, 0xc9, 0x4a, 0x2f, 0xae, 0x33, 0x63, 0xb2, 0x88, 0x78, 0x2d, 0x24,
-	0xf2, 0x9c, 0x3b, 0x69, 0xf9, 0x8d, 0x79, 0x2f, 0x07, 0xa1, 0x59, 0xbd, 0xf6, 0x58, 0x78, 0x0d,
-	0xe5, 0x09, 0xba, 0xb4, 0xf1, 0x3f, 0xbd, 0x1b, 0xd5, 0xd4, 0xa6, 0xcb, 0x71, 0x13, 0x3b, 0x50,
-	0xba, 0x40, 0xc0, 0x98, 0x31, 0xcf, 0x8d, 0xdb, 0x9a, 0xcd, 0x38, 0x1d, 0x15, 0xf5, 0x10, 0x6f,
-	0xc5, 0x7f, 0x14, 0x5b, 0xa9, 0xfc, 0x8e, 0xb3, 0x6e, 0xea, 0x8c, 0xbc, 0x84, 0xe7, 0x87, 0x86,
-	0xa9, 0xd9, 0xe2, 0xad, 0x63, 0xdd, 0xf6, 0xd8, 0xc1, 0x2e, 0xd1, 0x8d, 0xae, 0xbd, 0xdb, 0xd2,
-	0x5b, 0xd6, 0x3e, 0x9e, 0x48, 0xe4, 0x23, 0xa8, 0x4c, 0x80, 0xea, 0xbb, 0x71, 0xa3, 0xb5, 0x5b,
-	0x87, 0xad, 0x2e, 0x76, 0xe3, 0x73, 0x78, 0x7a, 0xcb, 0x6d, 0xe3, 0xd8, 0xb4, 0xb0, 0xc5, 0x72,
-	0xe4, 0x11, 0xac, 0xe8, 0x86, 0x7d, 0x5d, 0x42, 0xce, 0x9e, 0x2e, 0xf0, 0x7d, 0xbd, 0xf9, 0x5f,
-	0x00, 0x00, 0x00, 0xff, 0xff, 0x54, 0x69, 0x9f, 0x5c, 0xd2, 0x0a, 0x00, 0x00,
+	// 1214 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x56, 0x5d, 0x6e, 0xdb, 0x46,
+	0x10, 0x36, 0xa9, 0x9f, 0x48, 0xa3, 0x1f, 0x33, 0x9b, 0x36, 0x65, 0x92, 0xa6, 0x75, 0x88, 0xb4,
+	0x51, 0x51, 0x54, 0x86, 0x15, 0x04, 0x0d, 0x82, 0xe4, 0x41, 0x96, 0x68, 0x5b, 0x8d, 0x2c, 0x3a,
+	0x4b, 0x39, 0x40, 0x80, 0x14, 0x04, 0x2d, 0xae, 0x55, 0x36, 0x14, 0xc9, 0x2c, 0xd7, 0x49, 0x7c,
+	0x90, 0x02, 0xbd, 0x43, 0x1f, 0x7b, 0x81, 0x3e, 0xf4, 0x1e, 0x3d, 0x40, 0x5f, 0x7a, 0x83, 0x16,
+	0xdc, 0x5d, 0xea, 0xcf, 0x51, 0xe5, 0x00, 0x79, 0x23, 0x67, 0xbf, 0xef, 0x9b, 0x9d, 0xd9, 0xd9,
+	0xd9, 0x81, 0xc7, 0xe3, 0x28, 0x1a, 0x07, 0xa4, 0x39, 0x8e, 0x02, 0x37, 0x1c, 0x37, 0x23, 0x3a,
+	0xde, 0x1e, 0x05, 0xd1, 0x99, 0xb7, 0xed, 0xb9, 0xcc, 0x4d, 0x58, 0x44, 0xc9, 0xb6, 0x1f, 0x32,
+	0x42, 0x43, 0x37, 0xd8, 0x8e, 0x69, 0xc4, 0xa2, 0xed, 0xd7, 0x67, 0x84, 0x9e, 0x37, 0xf9, 0x37,
+	0xd2, 0x25, 0x7b, 0x0a, 0x6e, 0xbe, 0xd9, 0x39, 0x21, 0xcc, 0xbd, 0x7f, 0xf3, 0xc9, 0x07, 0xeb,
+	0x92, 0x90, 0xf9, 0x4c, 0x0a, 0xdf, 0xfc, 0x42, 0xd0, 0xc5, 0xd2, 0xc9, 0xd9, 0xe9, 0xf6, 0x5b,
+	0xea, 0xc6, 0x31, 0xa1, 0x89, 0x5c, 0xdf, 0xfd, 0x00, 0x79, 0x76, 0x1e, 0x13, 0x47, 0xf8, 0x08,
+	0x5c, 0x16, 0x84, 0x63, 0xa1, 0x61, 0xfc, 0xae, 0x40, 0xd5, 0xe4, 0x4e, 0x31, 0x49, 0xce, 0x02,
+	0x86, 0x1e, 0x42, 0x51, 0x6c, 0x42, 0x57, 0xb6, 0x94, 0x46, 0xa5, 0xb5, 0xd5, 0x5c, 0x15, 0x5e,
+	0x53, 0xf2, 0x24, 0x1e, 0x5d, 0x87, 0xe2, 0xe8, 0x8c, 0x26, 0x11, 0xd5, 0x73, 0x5b, 0x4a, 0xa3,
+	0x8a, 0xe5, 0x9f, 0xf1, 0x0c, 0x40, 0x68, 0x0f, 0xcf, 0x63, 0x82, 0x6e, 0xc1, 0x67, 0xd8, 0xb4,
+	0x8f, 0xfb, 0x43, 0x67, 0xf8, 0xe2, 0xc8, 0x74, 0x8e, 0x07, 0xf6, 0x91, 0xd9, 0xe9, 0xed, 0xf5,
+	0xcc, 0xae, 0xb6, 0x81, 0x4a, 0x90, 0xdf, 0x3b, 0xee, 0xf7, 0x35, 0x05, 0xd5, 0x01, 0x8e, 0xb0,
+	0xf5, 0x83, 0xd9, 0x19, 0xf6, 0xac, 0x81, 0xa6, 0xa2, 0x2a, 0x94, 0x9e, 0x9a, 0x2f, 0x1c, 0x6b,
+	0xd0, 0x7f, 0xa1, 0xe5, 0x8c, 0xbf, 0x72, 0x50, 0x78, 0x96, 0x1e, 0x01, 0xea, 0x02, 0xc4, 0x34,
+	0xfa, 0x99, 0x8c, 0x98, 0x1f, 0x85, 0xba, 0xba, 0x95, 0x6b, 0x54, 0x5a, 0x77, 0x57, 0x6f, 0xf9,
+	0x68, 0x8a, 0xc5, 0x73, 0x3c, 0xf4, 0x18, 0xf2, 0xaf, 0xfc, 0xd0, 0xd3, 0x73, 0x9c, 0xdf, 0x58,
+	0xcd, 0x7f, 0xea, 0x87, 0x9e, 0xf9, 0x2e, 0xa6, 0x24, 0x49, 0x52, 0x0d, 0xce, 0x4a, 0x53, 0x76,
+	0xea, 0x07, 0x8c, 0x50, 0x3d, 0xbf, 0x2e, 0x65, 0x7b, 0x1c, 0x87, 0x25, 0x1e, 0x3d, 0x81, 0x42,
+	0x44, 0x3d, 0x42, 0xf5, 0x02, 0x77, 0x7c, 0xef, 0x7f, 0x37, 0x1e, 0x13, 0xca, 0xce, 0xad, 0x14,
+	0x8e, 0x05, 0x0b, 0xf5, 0xa1, 0xe2, 0xf9, 0x09, 0xf3, 0xc3, 0x11, 0x73, 0xa2, 0x50, 0x2f, 0x72,
+	0x91, 0x6f, 0xd7, 0x8b, 0x60, 0x72, 0x4a, 0x28, 0x09, 0x47, 0x04, 0x43, 0xc6, 0xb7, 0x42, 0x74,
+	0x07, 0xaa, 0x09, 0x73, 0x29, 0x73, 0xe4, 0x29, 0x5e, 0xe1, 0xa7, 0x58, 0xe1, 0xb6, 0x0e, 0x37,
+	0xa1, 0xdb, 0x00, 0x24, 0xf4, 0x32, 0x40, 0x89, 0x03, 0xca, 0x24, 0xf4, 0xe4, 0xf2, 0x75, 0x28,
+	0x46, 0xa7, 0xa7, 0x09, 0x61, 0x3a, 0x6c, 0x29, 0x8d, 0x02, 0x96, 0x7f, 0x68, 0x07, 0x0a, 0x81,
+	0x3f, 0xf1, 0x99, 0x5e, 0xe5, 0xf9, 0xb9, 0x95, 0xed, 0x30, 0x2b, 0xec, 0x66, 0x2f, 0x64, 0xf7,
+	0x5b, 0xcf, 0xdd, 0xe0, 0x8c, 0x60, 0x81, 0x34, 0xee, 0x42, 0x7d, 0x31, 0xd7, 0x08, 0x41, 0x3e,
+	0x74, 0x27, 0x84, 0x97, 0x65, 0x19, 0xf3, 0x6f, 0xe3, 0x1e, 0x5c, 0xbd, 0x10, 0xd3, 0x14, 0xa8,
+	0xce, 0x01, 0x8f, 0x01, 0x66, 0x47, 0x8f, 0xf6, 0xa1, 0x14, 0x4b, 0x9a, 0xac, 0xf2, 0x0f, 0x4a,
+	0xda, 0x94, 0x6c, 0xfc, 0xa3, 0x40, 0x6d, 0xe1, 0x64, 0x3e, 0x9a, 0x34, 0xb2, 0xa0, 0xec, 0xf9,
+	0x74, 0x5a, 0xd7, 0x4a, 0xa3, 0xde, 0xda, 0xb9, 0x64, 0x79, 0x34, 0xbb, 0x19, 0x11, 0xcf, 0x34,
+	0x0c, 0x13, 0xca, 0x53, 0x3b, 0xba, 0x01, 0x9f, 0x76, 0x7b, 0x58, 0xdc, 0xae, 0xa5, 0x3b, 0x58,
+	0x83, 0x72, 0xdb, 0xee, 0x98, 0x83, 0x6e, 0x6f, 0xb0, 0x2f, 0x2e, 0x62, 0xd7, 0x9c, 0xfe, 0xab,
+	0xc6, 0x9f, 0x0a, 0x14, 0x45, 0x15, 0xa3, 0xe7, 0xa0, 0x8d, 0xa2, 0x49, 0x1c, 0x25, 0x3e, 0x23,
+	0x8e, 0xbc, 0x01, 0x22, 0xe6, 0x6f, 0x56, 0xef, 0xb4, 0x93, 0x31, 0x84, 0xc8, 0xc1, 0x06, 0xde,
+	0x1c, 0x2d, 0x9a, 0x90, 0x0d, 0x9b, 0x59, 0x1a, 0x32, 0x59, 0x95, 0xcb, 0x36, 0xd6, 0x27, 0x60,
+	0xaa, 0x5a, 0x8f, 0x17, 0x2c, 0xbb, 0x35, 0xa8, 0x08, 0x2d, 0x27, 0x6d, 0x85, 0xc6, 0x1f, 0x0a,
+	0x6c, 0x2e, 0x6d, 0x05, 0xed, 0x82, 0x1a, 0xc5, 0x3c, 0x82, 0x7a, 0xab, 0x75, 0xe9, 0x08, 0x9a,
+	0x56, 0x4c, 0xa8, 0xcb, 0x22, 0x8a, 0xd5, 0x28, 0x46, 0x8f, 0xe0, 0x8a, 0x70, 0x93, 0xc8, 0x66,
+	0xb4, 0xbe, 0x19, 0x64, 0x04, 0xe3, 0x3b, 0x28, 0x65, 0x5a, 0x48, 0x87, 0x4f, 0xac, 0x23, 0x13,
+	0xb7, 0x87, 0x16, 0x5e, 0x3a, 0x9f, 0x2b, 0x90, 0x6b, 0x0f, 0xba, 0x9a, 0x62, 0xfc, 0xad, 0x42,
+	0x7d, 0x31, 0xec, 0x8f, 0x57, 0x7d, 0x6d, 0x9e, 0x8a, 0x4b, 0x97, 0xdd, 0xfb, 0x32, 0xf1, 0x00,
+	0x0a, 0x6f, 0xd2, 0x1b, 0xcd, 0x5f, 0x83, 0x4a, 0xeb, 0xcb, 0xd5, 0x2a, 0xf2, 0xe2, 0x73, 0xb4,
+	0xf1, 0x8b, 0x72, 0xa9, 0x2c, 0xd4, 0xa0, 0xdc, 0x37, 0x6d, 0xdb, 0x19, 0x1e, 0xb4, 0x07, 0x9a,
+	0x82, 0xae, 0x03, 0x9a, 0xfe, 0x3a, 0x16, 0x76, 0xcc, 0x67, 0xc7, 0xed, 0xbe, 0xa6, 0x22, 0x0d,
+	0xaa, 0xfb, 0xd8, 0x6c, 0x0f, 0x4d, 0x2c, 0x90, 0xb9, 0xb4, 0xf2, 0xe7, 0x2d, 0x33, 0x70, 0x1e,
+	0x95, 0xa1, 0x20, 0x3e, 0x0b, 0x29, 0xef, 0xa0, 0x6d, 0x3b, 0xed, 0x41, 0xc7, 0xb4, 0x87, 0x16,
+	0xd6, 0x2a, 0xc6, 0xbf, 0x2a, 0x94, 0xf6, 0x5f, 0x07, 0xe2, 0xd5, 0xb9, 0x03, 0x55, 0x3e, 0x01,
+	0x38, 0x09, 0xa3, 0x7e, 0x38, 0x96, 0x3d, 0xa9, 0xc2, 0x6d, 0x36, 0x37, 0xa1, 0xaf, 0xa0, 0xee,
+	0x06, 0x41, 0xf4, 0xd6, 0x09, 0x7c, 0x46, 0xa8, 0x1b, 0x24, 0x3c, 0x9b, 0x25, 0x5c, 0xe3, 0xd6,
+	0xbe, 0x34, 0xa2, 0x97, 0x50, 0x4f, 0x1b, 0x94, 0xe7, 0x9c, 0xf8, 0xa1, 0xe7, 0x87, 0xe3, 0x44,
+	0x3e, 0x05, 0x0f, 0x56, 0xa7, 0x2b, 0xdb, 0x45, 0x73, 0x90, 0x12, 0x77, 0x25, 0xcf, 0x0c, 0x19,
+	0x3d, 0xc7, 0xb5, 0x70, 0xde, 0x86, 0x5e, 0xc2, 0x35, 0x5e, 0xaa, 0x7e, 0x14, 0xba, 0xc1, 0xcc,
+	0x45, 0x7e, 0xdd, 0x43, 0x91, 0xb9, 0x38, 0x72, 0xa9, 0x3b, 0x21, 0x69, 0x91, 0xa2, 0x99, 0x4e,
+	0xa6, 0x7e, 0x73, 0x02, 0xe8, 0xe2, 0x16, 0x90, 0x06, 0xb9, 0x57, 0xe4, 0x5c, 0xa6, 0x24, 0xfd,
+	0x44, 0xed, 0xac, 0x12, 0xd4, 0x75, 0x25, 0x79, 0xd1, 0xaf, 0x60, 0x3e, 0x52, 0x1f, 0x2a, 0xc6,
+	0x3b, 0xb8, 0x7a, 0x61, 0x1d, 0x7d, 0xbf, 0xa8, 0xbd, 0xae, 0xca, 0x0e, 0x36, 0xa4, 0x22, 0xd2,
+	0x17, 0xa7, 0x95, 0x83, 0x8d, 0x6c, 0x5e, 0xd9, 0xd5, 0xa0, 0x1e, 0x67, 0xfa, 0xa2, 0x59, 0xfc,
+	0x96, 0x07, 0x8d, 0xfb, 0x15, 0x73, 0xcc, 0xae, 0xcb, 0x46, 0x3f, 0xa1, 0x7b, 0xb0, 0x99, 0xbc,
+	0xf2, 0xe3, 0x98, 0x78, 0x0e, 0xe5, 0xe6, 0x44, 0x2f, 0xf2, 0x57, 0xaf, 0x2e, 0xcd, 0x02, 0x9c,
+	0xa4, 0x95, 0x90, 0x01, 0x17, 0xe6, 0xa3, 0x9a, 0xb4, 0xca, 0xc7, 0xd3, 0x01, 0x24, 0x06, 0x29,
+	0x29, 0xc7, 0x5d, 0xcb, 0x6e, 0xb4, 0xb3, 0x76, 0x08, 0xe3, 0x94, 0xe6, 0x6c, 0xce, 0xc2, 0x1a,
+	0x99, 0x5b, 0xe0, 0x93, 0xd7, 0x21, 0xd4, 0x17, 0x1c, 0x64, 0x1d, 0xea, 0xeb, 0xcb, 0x89, 0xe3,
+	0xda, 0xbc, 0x62, 0xb2, 0x34, 0x0b, 0xe4, 0x97, 0x67, 0x81, 0x1f, 0xa1, 0x3a, 0x89, 0x28, 0x99,
+	0xfa, 0x2a, 0xf0, 0x40, 0x1e, 0xad, 0xf6, 0xb5, 0x9c, 0xe0, 0xe6, 0x61, 0x44, 0x89, 0x74, 0xc6,
+	0x23, 0xaa, 0x4c, 0x66, 0x06, 0xe3, 0x57, 0x05, 0x36, 0x97, 0x00, 0xe8, 0x0e, 0xdc, 0x3e, 0xb4,
+	0xb0, 0xe9, 0x88, 0xf9, 0xd2, 0x7e, 0xdf, 0x80, 0xa9, 0x41, 0x75, 0x60, 0x0d, 0x9d, 0xbd, 0xde,
+	0xa0, 0x67, 0x1f, 0x98, 0x5d, 0x4d, 0x41, 0x9f, 0x83, 0xbe, 0x40, 0x6a, 0xef, 0xa5, 0xad, 0xa1,
+	0xdf, 0x3b, 0xec, 0x0d, 0x35, 0x15, 0xdd, 0x86, 0x1b, 0xef, 0x59, 0xed, 0x1c, 0x63, 0xdb, 0xc2,
+	0x5a, 0x1e, 0x5d, 0x83, 0xcd, 0x81, 0xe5, 0xcc, 0x23, 0xb4, 0xdc, 0x49, 0x91, 0x8f, 0x35, 0xf7,
+	0xff, 0x0b, 0x00, 0x00, 0xff, 0xff, 0xee, 0x2d, 0x21, 0x6f, 0x56, 0x0c, 0x00, 0x00,
 }
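The recurring change across these generated files adds an explicit `json=camelCase` option to every protobuf struct tag (e.g. `name=start_cursor,json=startCursor`), so the JSON field name no longer has to be derived from the proto field name. A minimal sketch of how such a tag value can be parsed, using only the standard library (the `jsonName` helper is illustrative, not part of the protobuf runtime):

```go
package main

import (
	"fmt"
	"strings"
)

// jsonName extracts the json= option from a protobuf struct-tag value,
// e.g. "bytes,7,opt,name=start_cursor,json=startCursor,proto3" -> "startCursor".
// It falls back to the name= option when no json= option is present.
func jsonName(tag string) string {
	name := ""
	for _, opt := range strings.Split(tag, ",") {
		if strings.HasPrefix(opt, "json=") {
			return strings.TrimPrefix(opt, "json=")
		}
		if strings.HasPrefix(opt, "name=") {
			name = strings.TrimPrefix(opt, "name=")
		}
	}
	return name
}

func main() {
	// With the explicit json= option added by this change:
	fmt.Println(jsonName("bytes,7,opt,name=start_cursor,json=startCursor,proto3")) // startCursor
	// Without it, only the proto field name is available:
	fmt.Println(jsonName("bytes,8,opt,name=end_cursor,proto3")) // end_cursor
}
```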
diff --git a/go/src/google.golang.org/cloud/datastore/internal/type_proto/latlng.pb.go b/go/src/google.golang.org/cloud/datastore/internal/type_proto/latlng.pb.go
index c68eabc..b39d264 100644
--- a/go/src/google.golang.org/cloud/datastore/internal/type_proto/latlng.pb.go
+++ b/go/src/google.golang.org/cloud/datastore/internal/type_proto/latlng.pb.go
@@ -82,15 +82,15 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 147 bytes of a gzipped FileDescriptorProto
+	// 153 bytes of a gzipped FileDescriptorProto
 	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x72, 0x4a, 0xcf, 0xcf, 0x4f,
 	0xcf, 0x49, 0xd5, 0x4b, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0xcb, 0x2f, 0x4a, 0xd7, 0x4f, 0xce,
 	0xc9, 0x2f, 0x4d, 0xd1, 0x4f, 0x49, 0x2c, 0x49, 0x2c, 0x2e, 0xc9, 0x2f, 0x4a, 0xd5, 0xcf, 0xcc,
 	0x2b, 0x49, 0x2d, 0xca, 0x4b, 0xcc, 0xd1, 0x2f, 0xa9, 0x2c, 0x48, 0x8d, 0x2f, 0x28, 0xca, 0x2f,
-	0xc9, 0xd7, 0xcf, 0x49, 0x2c, 0xc9, 0x01, 0x2a, 0x07, 0x73, 0x84, 0xb8, 0xa1, 0x66, 0x80, 0xe4,
-	0x95, 0x74, 0xb9, 0xd8, 0x7c, 0x12, 0x4b, 0x7c, 0xf2, 0xd2, 0x85, 0x04, 0xb8, 0x38, 0x80, 0xca,
-	0x32, 0x4b, 0x4a, 0x53, 0x52, 0x25, 0x18, 0x15, 0x18, 0x35, 0x18, 0x85, 0x04, 0xb9, 0x38, 0x73,
-	0xf2, 0xf3, 0xd2, 0x21, 0x42, 0x4c, 0x20, 0x21, 0x27, 0xb6, 0x45, 0x4c, 0xcc, 0xee, 0x21, 0x01,
-	0x49, 0x6c, 0x60, 0xa3, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x4f, 0x57, 0xcc, 0x05, 0x90,
-	0x00, 0x00, 0x00,
+	0xc9, 0xd7, 0xcf, 0x49, 0x2c, 0xc9, 0xc9, 0x4b, 0xd7, 0x03, 0x73, 0x84, 0xb8, 0xa1, 0x66, 0x80,
+	0xe4, 0x95, 0x9c, 0xb8, 0xd8, 0x7c, 0x12, 0x4b, 0x7c, 0xf2, 0xd2, 0x85, 0xa4, 0xb8, 0x38, 0x72,
+	0x12, 0x4b, 0x32, 0x4b, 0x4a, 0x53, 0x52, 0x25, 0x18, 0x15, 0x18, 0x35, 0x18, 0x83, 0xe0, 0x7c,
+	0x21, 0x19, 0x2e, 0xce, 0x9c, 0xfc, 0xbc, 0x74, 0x88, 0x24, 0x13, 0x58, 0x12, 0x21, 0xe0, 0xc4,
+	0xb6, 0x88, 0x89, 0xd9, 0x3d, 0x24, 0x20, 0x89, 0x0d, 0x6c, 0xbe, 0x31, 0x20, 0x00, 0x00, 0xff,
+	0xff, 0xc0, 0x8d, 0x0e, 0xf3, 0xa5, 0x00, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/cloud/datastore/load.go b/go/src/google.golang.org/cloud/datastore/load.go
index e881e04..62157f7 100644
--- a/go/src/google.golang.org/cloud/datastore/load.go
+++ b/go/src/google.golang.org/cloud/datastore/load.go
@@ -25,6 +25,7 @@
 var (
 	typeOfByteSlice = reflect.TypeOf([]byte(nil))
 	typeOfTime      = reflect.TypeOf(time.Time{})
+	typeOfGeoPoint  = reflect.TypeOf(GeoPoint{})
 )
 
 // typeMismatchReason returns a string explaining why the property p could not
@@ -42,6 +43,8 @@
 		entityType = "float"
 	case *Key:
 		entityType = "*datastore.Key"
+	case GeoPoint:
+		entityType = "GeoPoint"
 	case time.Time:
 		entityType = "time.Time"
 	case []byte:
@@ -58,6 +61,20 @@
 }
 
 func (l *propertyLoader) load(codec *structCodec, structValue reflect.Value, p Property, prev map[string]struct{}) string {
+	sl, ok := p.Value.([]interface{})
+	if !ok {
+		return l.loadOneElement(codec, structValue, p, prev)
+	}
+	for _, val := range sl {
+		p.Value = val
+		if errStr := l.loadOneElement(codec, structValue, p, prev); errStr != "" {
+			return errStr
+		}
+	}
+	return ""
+}
+
+func (l *propertyLoader) loadOneElement(codec *structCodec, structValue reflect.Value, p Property, prev map[string]struct{}) string {
 	var sliceOk bool
 	var v reflect.Value
 	// Traverse a struct's struct-typed fields.
@@ -78,6 +95,7 @@
 			break
 		}
 
+		// If the element is a slice, we need to accommodate it.
 		if v.Kind() == reflect.Slice {
 			if l.m == nil {
 				l.m = make(map[string]int)
@@ -102,9 +120,9 @@
 		slice = v
 		v = reflect.New(v.Type().Elem()).Elem()
 	} else if _, ok := prev[p.Name]; ok && !sliceOk {
-		// Zero the field back out that was set previously, turns out its a slice and we don't know what to do with it
+		// Zero the field back out that was set previously, turns out
+		// it's a slice and we don't know what to do with it
 		v.Set(reflect.Zero(v.Type()))
-
 		return "multiple-valued property requires a slice field type"
 	}
 
@@ -159,6 +177,12 @@
 				return typeMismatchReason(p, v)
 			}
 			v.Set(reflect.ValueOf(x))
+		case typeOfGeoPoint:
+			x, ok := pValue.(GeoPoint)
+			if !ok && pValue != nil {
+				return typeMismatchReason(p, v)
+			}
+			v.Set(reflect.ValueOf(x))
 		default:
 			return typeMismatchReason(p, v)
 		}
@@ -216,31 +240,18 @@
 	props := src.Properties
 	out := make([]Property, 0, len(props))
 	for name, val := range props {
-		noIndex := val.ExcludeFromIndexes
-		if arr, ok := val.ValueType.(*pb.Value_ArrayValue); ok {
-			for _, v := range arr.ArrayValue.Values {
-				out = append(out, Property{
-					Name:     name,
-					Value:    propValue(v),
-					NoIndex:  noIndex,
-					Multiple: true,
-				})
-			}
-		} else {
-			out = append(out, Property{
-				Name:     name,
-				Value:    propValue(val),
-				NoIndex:  noIndex,
-				Multiple: false,
-			})
-		}
+		out = append(out, Property{
+			Name:    name,
+			Value:   propToValue(val),
+			NoIndex: val.ExcludeFromIndexes,
+		})
 	}
 	return out
 }
 
-// propValue returns a Go value that represents the PropertyValue. For
+// propToValue returns a Go value that represents the PropertyValue. For
 // example, a TimestampValue becomes a time.Time.
-func propValue(v *pb.Value) interface{} {
+func propToValue(v *pb.Value) interface{} {
 	switch v := v.ValueType.(type) {
 	case *pb.Value_NullValue:
 		return nil
@@ -261,13 +272,16 @@
 	case *pb.Value_BlobValue:
 		return []byte(v.BlobValue)
 	case *pb.Value_GeoPointValue:
-		// TODO(djd): Support GeoPointValue.
-		return nil
+		return GeoPoint{Lat: v.GeoPointValue.Latitude, Lng: v.GeoPointValue.Longitude}
 	case *pb.Value_EntityValue:
 		// TODO(djd): Support EntityValue.
 		return nil
 	case *pb.Value_ArrayValue:
-		panic("propValue should not encounter ArrayValue")
+		arr := make([]interface{}, 0, len(v.ArrayValue.Values))
+		for _, v := range v.ArrayValue.Values {
+			arr = append(arr, propToValue(v))
+		}
+		return arr
 	default:
 		return nil
 	}
diff --git a/go/src/google.golang.org/cloud/datastore/prop.go b/go/src/google.golang.org/cloud/datastore/prop.go
index 41bf097..e267317 100644
--- a/go/src/google.golang.org/cloud/datastore/prop.go
+++ b/go/src/google.golang.org/cloud/datastore/prop.go
@@ -29,9 +29,8 @@
 const maxBlobLen = 1 << 20
 
 // Property is a name/value pair plus some metadata. A datastore entity's
-// contents are loaded and saved as a sequence of Properties. An entity can
-// have multiple Properties with the same name, provided that p.Multiple is
-// true on all of that entity's Properties with that name.
+// contents are loaded and saved as a sequence of Properties. Each property
+// name must be unique within an entity.
 type Property struct {
 	// Name is the property name.
 	Name string
@@ -42,15 +41,16 @@
 	//	- float64
 	//	- *Key
 	//	- time.Time
+	//	- GeoPoint
 	//	- []byte (up to 1 megabyte in length)
+	// Value can also be:
+	//	- []interface{} where each element is one of the above types
 	// This set is smaller than the set of valid struct field types that the
-	// datastore can load and save. A Property Value cannot be a slice (apart
-	// from []byte); use multiple Properties instead. Also, a Value's type
-	// must be explicitly on the list above; it is not sufficient for the
-	// underlying type to be on that list. For example, a Value of "type
-	// myInt64 int64" is invalid. Smaller-width integers and floats are also
-	// invalid. Again, this is more restrictive than the set of valid struct
-	// field types.
+	// datastore can load and save. A Value's type must be explicitly on
+	// the list above; it is not sufficient for the underlying type to be
+	// on that list. For example, a Value of "type myInt64 int64" is
+	// invalid. Smaller-width integers and floats are also invalid. Again,
+	// this is more restrictive than the set of valid struct field types.
 	//
 	// A Value will have an opaque type when loading entities from an index,
 	// such as via a projection query. Load entities into a struct instead
@@ -62,14 +62,9 @@
 	// value.
 	Value interface{}
 	// NoIndex is whether the datastore cannot index this property.
-	// If NoIndex is set to false, []byte values are limited to 1500 bytes and
-	// string values are limited to 1500 bytes.
+	// If NoIndex is set to false, []byte and string values are limited to
+	// 1500 bytes.
 	NoIndex bool
-	// Multiple is whether the entity can have multiple properties with
-	// the same name. Even if a particular instance only has one property with
-	// a certain name, Multiple should be true if a struct would best represent
-	// it as a field of type []T instead of type T.
-	Multiple bool
 }
 
 // PropertyLoadSaver can be converted from and to a slice of Properties.
@@ -220,7 +215,7 @@
 			c.hasSlice = c.hasSlice || fIsSlice
 		}
 
-		if substructType != nil && substructType != typeOfTime {
+		if substructType != nil && substructType != typeOfTime && substructType != typeOfGeoPoint {
 			if name != "" {
 				name = name + "."
 			}
diff --git a/go/src/google.golang.org/cloud/datastore/query.go b/go/src/google.golang.org/cloud/datastore/query.go
index 032a9ec..6c5ea6b 100644
--- a/go/src/google.golang.org/cloud/datastore/query.go
+++ b/go/src/google.golang.org/cloud/datastore/query.go
@@ -294,10 +294,6 @@
 // Start returns a derivative query with the given start point.
 func (q *Query) Start(c Cursor) *Query {
 	q = q.clone()
-	if c.cc == nil {
-		q.err = errors.New("datastore: invalid cursor")
-		return q
-	}
 	q.start = c.cc
 	return q
 }
 
 // A zero Cursor in Start or End is treated as no constraint; the previous
 // "invalid cursor" error for a nil cc is gone, matching the new Cursor docs.
@@ -305,10 +301,6 @@
 // End returns a derivative query with the given end point.
 func (q *Query) End(c Cursor) *Query {
 	q = q.clone()
-	if c.cc == nil {
-		q.err = errors.New("datastore: invalid cursor")
-		return q
-	}
 	q.end = c.cc
 	return q
 }
@@ -342,7 +334,7 @@
 		if qf.FieldName == "" {
 			return errors.New("datastore: empty query filter field name")
 		}
-		v, err := interfaceToProto(reflect.ValueOf(qf.Value).Interface())
+		v, err := interfaceToProto(reflect.ValueOf(qf.Value).Interface(), false)
 		if err != nil {
 			return fmt.Errorf("datastore: bad query filter value type: %v", err)
 		}
@@ -409,61 +401,36 @@
 }
 
 // Count returns the number of results for the given query.
+//
+// The running time and number of API calls made by Count scale linearly
+// with the sum of the query's offset and limit. Unless the result count is
+// expected to be small, it is best to specify a limit; otherwise Count will
+// continue until it finishes counting or the provided context expires.
 func (c *Client) Count(ctx context.Context, q *Query) (int, error) {
 	// Check that the query is well-formed.
 	if q.err != nil {
 		return 0, q.err
 	}
 
-	// Run a copy of the query, with keysOnly true (if we're not a projection,
+	// Create a copy of the query, with keysOnly true (if we're not a projection,
 	// since the two are incompatible).
 	newQ := q.clone()
 	newQ.keysOnly = len(newQ.projection) == 0
-	req := &pb.RunQueryRequest{
-		ProjectId: c.dataset,
-	}
 
-	if ns := ctxNamespace(ctx); ns != "" {
-		req.PartitionId = &pb.PartitionId{
-			NamespaceId: ns,
-		}
-	}
-	if err := newQ.toProto(req); err != nil {
-		return 0, err
-	}
-	resp, err := c.client.RunQuery(ctx, req)
-	if err != nil {
-		return 0, err
-	}
-	var n int
-	b := resp.Batch
+	// Create an iterator and use it to walk through the batches of results
+	// directly.
+	it := c.Run(ctx, newQ)
+	n := 0
 	for {
-		n += len(b.EntityResults)
-		if b.MoreResults != pb.QueryResultBatch_NOT_FINISHED {
-			break
+		err := it.nextBatch()
+		if err == Done {
+			return n, nil
 		}
-		var err error
-		// TODO(jbd): Support count queries that have a limit and an offset.
-		resp, err = callNext(ctx, c, req, resp, 0, 0)
 		if err != nil {
 			return 0, err
 		}
+		n += len(it.results)
 	}
-	return int(n), nil
-}
-
-// TODO(djd): This function is ugly in its current context. Refactor.
-func callNext(ctx context.Context, client *Client, req *pb.RunQueryRequest, resp *pb.RunQueryResponse, offset, limit int32) (*pb.RunQueryResponse, error) {
-	if resp.GetBatch().EndCursor == nil {
-		return nil, errors.New("datastore: internal error: server did not return a cursor")
-	}
-	q := req.GetQuery()
-	q.StartCursor = resp.Batch.EndCursor
-	q.Offset = offset
-	if limit >= 0 {
-		q.Limit = &wrapperspb.Int32Value{limit}
-	}
-	return client.client.RunQuery(ctx, req)
 }
 
 // GetAll runs the provided query in the given context and returns all keys
@@ -480,6 +447,12 @@
 // added to dst.
 //
 // If q is a ``keys-only'' query, GetAll ignores dst and only returns the keys.
+//
+// The running time and number of API calls made by GetAll scale linearly
+// with the sum of the query's offset and limit. Unless the result count is
+// expected to be small, it is best to specify a limit; otherwise GetAll will
+// continue until it finishes collecting results or the provided context
+// expires.
 func (c *Client) GetAll(ctx context.Context, q *Query, dst interface{}) ([]*Key, error) {
 	var (
 		dv               reflect.Value
@@ -552,48 +525,24 @@
 		return &Iterator{err: q.err}
 	}
 	t := &Iterator{
-		ctx:    ctx,
-		client: c,
-		limit:  q.limit,
-		q:      q,
-		prevCC: q.start,
+		ctx:          ctx,
+		client:       c,
+		limit:        q.limit,
+		offset:       q.offset,
+		keysOnly:     q.keysOnly,
+		pageCursor:   q.start,
+		entityCursor: q.start,
+		req: &pb.RunQueryRequest{
+			ProjectId: c.dataset,
+		},
 	}
-	t.req.Reset()
-	t.req.ProjectId = c.dataset
 	if ns := ctxNamespace(ctx); ns != "" {
 		t.req.PartitionId = &pb.PartitionId{
 			NamespaceId: ns,
 		}
 	}
-	if err := q.toProto(&t.req); err != nil {
+	if err := q.toProto(t.req); err != nil {
 		t.err = err
-		return t
-	}
-	resp, err := c.client.RunQuery(ctx, &t.req)
-	if err != nil {
-		t.err = err
-		return t
-	}
-	t.res = *resp
-	b := t.res.GetBatch()
-	offset := q.offset - b.SkippedResults
-	for offset > 0 && b.MoreResults == pb.QueryResultBatch_NOT_FINISHED {
-		t.prevCC = b.EndCursor
-		resp, err := callNext(t.ctx, c, &t.req, &t.res, offset, t.limit)
-		if err != nil {
-			t.err = err
-			break
-		}
-		t.res = *resp
-		skip := b.SkippedResults
-		if skip < 0 {
-			t.err = errors.New("datastore: internal error: negative number of skipped_results")
-			break
-		}
-		offset -= skip
-	}
-	if offset < 0 {
-		t.err = errors.New("datastore: internal error: query offset was overshot")
 	}
 	return t
 }
@@ -603,20 +552,28 @@
 	ctx    context.Context
 	client *Client
 	err    error
-	// req is the request we sent previously, we need to keep track of it to resend it
-	req pb.RunQueryRequest
-	// res is the result of the most recent RunQuery or Next API call.
-	res pb.RunQueryResponse
-	// i is how many elements of res.Result we have iterated over.
-	i int
+
+	// results is the list of EntityResults still to be iterated over from the
+	// most recent API call. It will be nil if no requests have yet been issued.
+	results []*pb.EntityResult
+	// req is the request to send. It may be modified and used multiple times.
+	req *pb.RunQueryRequest
+
 	// limit is the limit on the number of results this iterator should return.
+	// The zero value is used to prevent further fetches from the server.
 	// A negative value means unlimited.
 	limit int32
-	// q is the original query which yielded this iterator.
-	q *Query
-	// prevCC is the compiled cursor that marks the end of the previous batch
-	// of results.
-	prevCC []byte
+	// offset is the number of results that still need to be skipped.
+	offset int32
+	// keysOnly records whether the query was keys-only (skip entity loading).
+	keysOnly bool
+
+	// pageCursor is the compiled cursor for the next batch/page of results.
+	// TODO(djd): Can we delete this in favour of paging with the last
+	// entityCursor from each batch?
+	pageCursor []byte
+	// entityCursor is the compiled cursor of the next result.
+	entityCursor []byte
 }
 
 // Done is returned when a query iteration has completed.
@@ -633,48 +590,28 @@
 	if err != nil {
 		return nil, err
 	}
-	if dst != nil && !t.q.keysOnly {
+	if dst != nil && !t.keysOnly {
 		err = loadEntity(dst, e)
 	}
 	return k, err
 }
 
 func (t *Iterator) next() (*Key, *pb.Entity, error) {
+	// Fetch additional batches while there are no more results.
+	for t.err == nil && len(t.results) == 0 {
+		t.err = t.nextBatch()
+	}
 	if t.err != nil {
 		return nil, nil, t.err
 	}
 
-	// Issue datastore_v3/Next RPCs as necessary.
-	b := t.res.GetBatch()
-	for t.i == len(b.EntityResults) {
-		if b.MoreResults != pb.QueryResultBatch_NOT_FINISHED {
-			t.err = Done
-			return nil, nil, t.err
-		}
-		t.prevCC = b.EndCursor
-		resp, err := callNext(t.ctx, t.client, &t.req, &t.res, 0, t.limit)
-		if err != nil {
-			t.err = err
-			return nil, nil, t.err
-		}
-		t.res = *resp
-		if b.SkippedResults != 0 {
-			t.err = errors.New("datastore: internal error: iterator has skipped results")
-			return nil, nil, t.err
-		}
-		t.i = 0
-		if t.limit >= 0 {
-			t.limit -= int32(len(b.EntityResults))
-			if t.limit < 0 {
-				t.err = errors.New("datastore: internal error: query returned more results than the limit")
-				return nil, nil, t.err
-			}
-		}
+	// Extract the next result, update cursors, and parse the entity's key.
+	e := t.results[0]
+	t.results = t.results[1:]
+	t.entityCursor = e.Cursor
+	if len(t.results) == 0 {
+		t.entityCursor = t.pageCursor // At the end of the batch.
 	}
-
-	// Extract the key from the t.i'th element of t.res.Result.
-	e := b.EntityResults[t.i]
-	t.i++
 	if e.Entity.Key == nil {
 		return nil, nil, errors.New("datastore: internal error: server did not return a key")
 	}
@@ -682,43 +619,94 @@
 	if err != nil || k.Incomplete() {
 		return nil, nil, errors.New("datastore: internal error: server returned an invalid key")
 	}
+
 	return k, e.Entity, nil
 }
 
+// nextBatch makes a single call to the server for a batch of results.
+func (t *Iterator) nextBatch() error {
+	if t.limit == 0 {
+		return Done // Short-circuits the zero-item response.
+	}
+
+	// Adjust the query with the latest start cursor, limit and offset.
+	q := t.req.GetQuery()
+	q.StartCursor = t.pageCursor
+	q.Offset = t.offset
+	if t.limit >= 0 {
+		q.Limit = &wrapperspb.Int32Value{t.limit}
+	} else {
+		q.Limit = nil
+	}
+
+	// Run the query.
+	resp, err := t.client.client.RunQuery(t.ctx, t.req)
+	if err != nil {
+		return err
+	}
+
+	// Adjust any offset from skipped results.
+	skip := resp.Batch.SkippedResults
+	if skip < 0 {
+		return errors.New("datastore: internal error: negative number of skipped_results")
+	}
+	t.offset -= skip
+	if t.offset < 0 {
+		return errors.New("datastore: internal error: query skipped too many results")
+	}
+	if t.offset > 0 && len(resp.Batch.EntityResults) > 0 {
+		return errors.New("datastore: internal error: query returned results before requested offset")
+	}
+
+	// Adjust the limit.
+	if t.limit >= 0 {
+		t.limit -= int32(len(resp.Batch.EntityResults))
+		if t.limit < 0 {
+			return errors.New("datastore: internal error: query returned more results than the limit")
+		}
+	}
+
+	// If there are no more results available, set limit to zero to prevent
+	// further fetches. Otherwise, check that there is a next page cursor available.
+	if resp.Batch.MoreResults != pb.QueryResultBatch_NOT_FINISHED {
+		t.limit = 0
+	} else if resp.Batch.EndCursor == nil {
+		return errors.New("datastore: internal error: server did not return a cursor")
+	}
+
+	// Update cursors.
+	// If any results were skipped, use the SkippedCursor as the next entity cursor.
+	if skip > 0 {
+		t.entityCursor = resp.Batch.SkippedCursor
+	} else {
+		t.entityCursor = q.StartCursor
+	}
+	t.pageCursor = resp.Batch.EndCursor
+
+	t.results = resp.Batch.EntityResults
+	return nil
+}
+
 // Cursor returns a cursor for the iterator's current location.
 func (t *Iterator) Cursor() (Cursor, error) {
+	// If there is still an offset, we need to skip those results first.
+	for t.err == nil && t.offset > 0 {
+		t.err = t.nextBatch()
+	}
+
 	if t.err != nil && t.err != Done {
 		return Cursor{}, t.err
 	}
-	// If we are at either end of the current batch of results,
-	// return the compiled cursor at that end.
-	b := t.res.Batch
-	if t.i == 0 {
-		if b.SkippedResults > 0 {
-			return Cursor{b.SkippedCursor}, nil
-		}
-		if t.prevCC == nil {
-			// A nil pointer (of type *pb.CompiledCursor) means no constraint:
-			// passing it as the end cursor of a new query means unlimited results
-			// (glossing over the integer limit parameter for now).
-			// A non-nil pointer to an empty pb.CompiledCursor means the start:
-			// passing it as the end cursor of a new query means 0 results.
-			// If prevCC was nil, then the original query had no start cursor, but
-			// Iterator.Cursor should return "the start" instead of unlimited.
-			return Cursor{}, nil
-		}
-		return Cursor{t.prevCC}, nil
-	}
-	if t.i == len(b.EntityResults) {
-		return Cursor{b.EndCursor}, nil
-	}
-	// Otherwise, return the cursor associated with the current result.
-	return Cursor{b.EntityResults[t.i-1].Cursor}, nil
+
+	return Cursor{t.entityCursor}, nil
 }
 
 // Cursor is an iterator's position. It can be converted to and from an opaque
 // string. A cursor can be used from different HTTP requests, but only with a
 // query with the same kind, ancestor, filter and order constraints.
+//
+// The zero Cursor can be used to indicate that there is no start and/or end
+// constraint for a query.
 type Cursor struct {
 	cc []byte
 }
diff --git a/go/src/google.golang.org/cloud/datastore/save.go b/go/src/google.golang.org/cloud/datastore/save.go
index 66a23c5..fdfd1c1 100644
--- a/go/src/google.golang.org/cloud/datastore/save.go
+++ b/go/src/google.golang.org/cloud/datastore/save.go
@@ -22,6 +22,7 @@
 
 	timepb "github.com/golang/protobuf/ptypes/timestamp"
 	pb "google.golang.org/cloud/datastore/internal/proto"
+	tpb "google.golang.org/cloud/datastore/internal/type_proto"
 )
 
 // saveEntity saves an EntityProto into a PropertyLoadSaver or struct pointer.
@@ -39,15 +40,15 @@
 	return propertiesToProto(key, props)
 }
 
-func saveStructProperty(props *[]Property, name string, noIndex, multiple bool, v reflect.Value) error {
+// TODO(djd): Convert this and below to return ([]Property, error).
+func saveStructProperty(props *[]Property, name string, noIndex bool, v reflect.Value) error {
 	p := Property{
-		Name:     name,
-		NoIndex:  noIndex,
-		Multiple: multiple,
+		Name:    name,
+		NoIndex: noIndex,
 	}
 
 	switch x := v.Interface().(type) {
-	case *Key, time.Time:
+	case *Key, time.Time, GeoPoint:
 		p.Value = x
 	default:
 		switch v.Kind() {
@@ -62,6 +63,8 @@
 		case reflect.Slice:
 			if v.Type().Elem().Kind() == reflect.Uint8 {
 				p.Value = v.Bytes()
+			} else {
+				return saveSliceProperty(props, name, noIndex, v)
 			}
 		case reflect.Struct:
 			if !v.CanAddr() {
@@ -71,7 +74,7 @@
 			if err != nil {
 				return fmt.Errorf("datastore: unsupported struct field: %v", err)
 			}
-			return sub.(structPLS).save(props, name, noIndex, multiple)
+			return sub.(structPLS).save(props, name, noIndex)
 		}
 	}
 	if p.Value == nil {
@@ -81,15 +84,57 @@
 	return nil
 }
 
+func saveSliceProperty(props *[]Property, name string, noIndex bool, v reflect.Value) error {
+	// Easy case: if the slice is empty, we're done.
+	if v.Len() == 0 {
+		return nil
+	}
+	// Work out the properties generated by the first element in the slice. This will
+	// usually be a single property, but will be more if this is a slice of structs.
+	var headProps []Property
+	if err := saveStructProperty(&headProps, name, noIndex, v.Index(0)); err != nil {
+		return err
+	}
+
+	// Convert the first element's properties into slice properties, and
+	// keep track of the values in a map.
+	values := make(map[string][]interface{}, len(headProps))
+	for _, p := range headProps {
+		values[p.Name] = append(make([]interface{}, 0, v.Len()), p.Value)
+	}
+
+	// Collect the properties generated by the subsequent elements.
+	for i := 1; i < v.Len(); i++ {
+		elemProps := make([]Property, 0, len(headProps))
+		if err := saveStructProperty(&elemProps, name, noIndex, v.Index(i)); err != nil {
+			return err
+		}
+		for _, p := range elemProps {
+			v, ok := values[p.Name]
+			if !ok {
+				return fmt.Errorf("datastore: unexpected property %q in elem %d of slice", p.Name, i)
+			}
+			values[p.Name] = append(v, p.Value)
+		}
+	}
+
+	// Convert to the final properties.
+	for _, p := range headProps {
+		p.Value = values[p.Name]
+		*props = append(*props, p)
+	}
+	return nil
+}
+
 func (s structPLS) Save() ([]Property, error) {
 	var props []Property
-	if err := s.save(&props, "", false, false); err != nil {
+	if err := s.save(&props, "", false); err != nil {
 		return nil, err
 	}
 	return props, nil
 }
 
-func (s structPLS) save(props *[]Property, prefix string, noIndex, multiple bool) error {
+func (s structPLS) save(props *[]Property, prefix string, noIndex bool) error {
 	for i, t := range s.codec.byIndex {
 		if t.name == "-" {
 			continue
@@ -103,17 +148,7 @@
 			continue
 		}
 		noIndex1 := noIndex || t.noIndex
-		// For slice fields that aren't []byte, save each element.
-		if v.Kind() == reflect.Slice && v.Type().Elem().Kind() != reflect.Uint8 {
-			for j := 0; j < v.Len(); j++ {
-				if err := saveStructProperty(props, name, noIndex1, true, v.Index(j)); err != nil {
-					return err
-				}
-			}
-			continue
-		}
-		// Otherwise, save the field itself.
-		if err := saveStructProperty(props, name, noIndex1, multiple, v); err != nil {
+		if err := saveStructProperty(props, name, noIndex1, v); err != nil {
 			return err
 		}
 	}
@@ -126,9 +161,8 @@
 		Properties: map[string]*pb.Value{},
 	}
 	indexedProps := 0
-	prevMultiple := make(map[string]*pb.Value)
 	for _, p := range props {
-		val, err := interfaceToProto(p.Value)
+		val, err := interfaceToProto(p.Value, p.NoIndex)
 		if err != nil {
 			return nil, fmt.Errorf("datastore: %v for a Property with Name %q", err, p.Name)
 		}
@@ -143,30 +177,6 @@
 		if indexedProps > maxIndexedProperties {
 			return nil, errors.New("datastore: too many indexed properties")
 		}
-		switch v := p.Value.(type) {
-		case string:
-			if len(v) > 1500 && !p.NoIndex {
-				return nil, fmt.Errorf("datastore: Property with Name %q is too long to index", p.Name)
-			}
-		case []byte:
-			if len(v) > 1500 && !p.NoIndex {
-				return nil, fmt.Errorf("datastore: Property with Name %q is too long to index", p.Name)
-			}
-		}
-		val.ExcludeFromIndexes = p.NoIndex
-		if p.Multiple {
-			if varr, ok := prevMultiple[p.Name]; ok {
-				arr := varr.ValueType.(*pb.Value_ArrayValue).ArrayValue
-				arr.Values = append(arr.Values, val)
-				continue
-			}
-			val = &pb.Value{
-				ValueType: &pb.Value_ArrayValue{&pb.ArrayValue{
-					Values: []*pb.Value{val},
-				}},
-			}
-			prevMultiple[p.Name] = val
-		}
 
 		if _, ok := e.Properties[p.Name]; ok {
 			return nil, fmt.Errorf("datastore: duplicate Property with Name %q", p.Name)
@@ -176,8 +186,8 @@
 	return e, nil
 }
 
-func interfaceToProto(iv interface{}) (*pb.Value, error) {
-	val := new(pb.Value)
+func interfaceToProto(iv interface{}, noIndex bool) (*pb.Value, error) {
+	val := &pb.Value{ExcludeFromIndexes: noIndex}
 	switch v := iv.(type) {
 	case int:
 		val.ValueType = &pb.Value_IntegerValue{int64(v)}
@@ -188,6 +198,9 @@
 	case bool:
 		val.ValueType = &pb.Value_BooleanValue{v}
 	case string:
+		if len(v) > 1500 && !noIndex {
+			return nil, errors.New("string property too long to index")
+		}
 		val.ValueType = &pb.Value_StringValue{v}
 	case float32:
 		val.ValueType = &pb.Value_DoubleValue{float64(v)}
@@ -197,6 +210,14 @@
 		if v != nil {
 			val.ValueType = &pb.Value_KeyValue{keyToProto(v)}
 		}
+	case GeoPoint:
+		if !v.Valid() {
+			return nil, errors.New("invalid GeoPoint value")
+		}
+		val.ValueType = &pb.Value_GeoPointValue{&tpb.LatLng{
+			Latitude:  v.Lat,
+			Longitude: v.Lng,
+		}}
 	case time.Time:
 		if v.Before(minTime) || v.After(maxTime) {
 			return nil, errors.New("time value out of range")
@@ -206,13 +227,29 @@
 			Nanos:   int32(v.Nanosecond()),
 		}}
 	case []byte:
+		if len(v) > 1500 && !noIndex {
+			return nil, errors.New("[]byte property too long to index")
+		}
 		val.ValueType = &pb.Value_BlobValue{v}
+	case []interface{}:
+		arr := make([]*pb.Value, 0, len(v))
+		for i, v := range v {
+			elem, err := interfaceToProto(v, noIndex)
+			if err != nil {
+				return nil, fmt.Errorf("%v at index %d", err, i)
+			}
+			arr = append(arr, elem)
+		}
+		val.ValueType = &pb.Value_ArrayValue{&pb.ArrayValue{arr}}
+		// ArrayValues have ExcludeFromIndexes set on the individual items, rather
+		// than the top-level value.
+		val.ExcludeFromIndexes = false
 	default:
 		if iv != nil {
 			return nil, fmt.Errorf("invalid Value type %t", iv)
 		}
+		val.ValueType = &pb.Value_NullValue{}
 	}
-	// TODO(jbd): Support ListValue and EntityValue.
-	// TODO(jbd): Support types whose underlying type is one of the types above.
+	// TODO(jbd): Support EntityValue.
 	return val, nil
 }
diff --git a/go/src/google.golang.org/cloud/examples/bigquery/concat_table/main.go b/go/src/google.golang.org/cloud/examples/bigquery/concat_table/main.go
index 978801b..3057b18 100644
--- a/go/src/google.golang.org/cloud/examples/bigquery/concat_table/main.go
+++ b/go/src/google.golang.org/cloud/examples/bigquery/concat_table/main.go
@@ -24,7 +24,6 @@
 	"time"
 
 	"golang.org/x/net/context"
-	"golang.org/x/oauth2/google"
 	"google.golang.org/cloud/bigquery"
 )
 
@@ -54,12 +53,8 @@
 		log.Fatalf("Different values must be supplied for each of --src1, --src2 and --dest")
 	}
 
-	httpClient, err := google.DefaultClient(context.Background(), bigquery.Scope)
-	if err != nil {
-		log.Fatalf("Creating http client: %v", err)
-	}
-
-	client, err := bigquery.NewClient(httpClient, *project)
+	ctx := context.Background()
+	client, err := bigquery.NewClient(ctx, *project)
 	if err != nil {
 		log.Fatalf("Creating bigquery client: %v", err)
 	}
@@ -83,7 +78,7 @@
 	}
 
 	// Concatenate data.
-	job, err := client.Copy(context.Background(), d, bigquery.Tables{s1, s2}, bigquery.WriteTruncate)
+	job, err := client.Copy(ctx, d, bigquery.Tables{s1, s2}, bigquery.WriteTruncate)
 
 	if err != nil {
 		log.Fatalf("Concatenating: %v", err)
@@ -93,7 +88,7 @@
 	fmt.Printf("Waiting for job to complete.\n")
 
 	for range time.Tick(*pollint) {
-		status, err := job.Status(context.Background())
+		status, err := job.Status(ctx)
 		if err != nil {
 			fmt.Printf("Failure determining status: %v", err)
 			break
diff --git a/go/src/google.golang.org/cloud/examples/bigquery/load/main.go b/go/src/google.golang.org/cloud/examples/bigquery/load/main.go
index 30ed9db..130cbc8 100644
--- a/go/src/google.golang.org/cloud/examples/bigquery/load/main.go
+++ b/go/src/google.golang.org/cloud/examples/bigquery/load/main.go
@@ -24,7 +24,6 @@
 	"time"
 
 	"golang.org/x/net/context"
-	"golang.org/x/oauth2/google"
 	"google.golang.org/cloud/bigquery"
 )
 
@@ -52,12 +51,8 @@
 		os.Exit(1)
 	}
 
-	httpClient, err := google.DefaultClient(context.Background(), bigquery.Scope)
-	if err != nil {
-		log.Fatalf("Creating http client: %v", err)
-	}
-
-	client, err := bigquery.NewClient(httpClient, *project)
+	ctx := context.Background()
+	client, err := bigquery.NewClient(ctx, *project)
 	if err != nil {
 		log.Fatalf("Creating bigquery client: %v", err)
 	}
@@ -73,7 +68,7 @@
 
 	// Load data from Google Cloud Storage into a BigQuery table.
 	job, err := client.Copy(
-		context.Background(), table, gcs,
+		ctx, table, gcs,
 		bigquery.MaxBadRecords(1),
 		bigquery.AllowQuotedNewlines(),
 		bigquery.WriteTruncate)
@@ -86,7 +81,7 @@
 	fmt.Printf("Waiting for job to complete.\n")
 
 	for range time.Tick(*pollint) {
-		status, err := job.Status(context.Background())
+		status, err := job.Status(ctx)
 		if err != nil {
 			fmt.Printf("Failure determining status: %v", err)
 			break
diff --git a/go/src/google.golang.org/cloud/examples/bigquery/query/main.go b/go/src/google.golang.org/cloud/examples/bigquery/query/main.go
index d6dc0b6..abc02a1 100644
--- a/go/src/google.golang.org/cloud/examples/bigquery/query/main.go
+++ b/go/src/google.golang.org/cloud/examples/bigquery/query/main.go
@@ -24,7 +24,6 @@
 	"time"
 
 	"golang.org/x/net/context"
-	"golang.org/x/oauth2/google"
 	"google.golang.org/cloud/bigquery"
 )
 
@@ -51,12 +50,8 @@
 		os.Exit(1)
 	}
 
-	httpClient, err := google.DefaultClient(context.Background(), bigquery.Scope)
-	if err != nil {
-		log.Fatalf("Creating http client: %v", err)
-	}
-
-	client, err := bigquery.NewClient(httpClient, *project)
+	ctx := context.Background()
+	client, err := bigquery.NewClient(ctx, *project)
 	if err != nil {
 		log.Fatalf("Creating bigquery client: %v", err)
 	}
@@ -76,7 +71,7 @@
 	}
 
 	// Query data.
-	job, err := client.Copy(context.Background(), d, query, bigquery.WriteTruncate)
+	job, err := client.Copy(ctx, d, query, bigquery.WriteTruncate)
 
 	if err != nil {
 		log.Fatalf("Querying: %v", err)
@@ -90,7 +85,7 @@
 	fmt.Printf("Waiting for job to complete.\n")
 
 	for range time.Tick(*pollint) {
-		status, err := job.Status(context.Background())
+		status, err := job.Status(ctx)
 		if err != nil {
 			fmt.Printf("Failure determining status: %v", err)
 			break
diff --git a/go/src/google.golang.org/cloud/examples/bigquery/read/main.go b/go/src/google.golang.org/cloud/examples/bigquery/read/main.go
index 181bd7c..5380199 100644
--- a/go/src/google.golang.org/cloud/examples/bigquery/read/main.go
+++ b/go/src/google.golang.org/cloud/examples/bigquery/read/main.go
@@ -26,7 +26,6 @@
 	"text/tabwriter"
 
 	"golang.org/x/net/context"
-	"golang.org/x/oauth2/google"
 	"google.golang.org/cloud/bigquery"
 )
 
@@ -38,11 +37,11 @@
 		" If set, --dataset, --table will be ignored, and results will be read from the specified job.")
 )
 
-func printValues(it *bigquery.Iterator) {
+func printValues(ctx context.Context, it *bigquery.Iterator) {
 	// one-space padding.
 	tw := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
 
-	for it.Next(context.Background()) {
+	for it.Next(ctx) {
 		var vals bigquery.ValueList
 		if err := it.Get(&vals); err != nil {
 			fmt.Printf("err calling get: %v\n", err)
@@ -63,30 +62,30 @@
 	}
 }
 
-func printTable(client *bigquery.Client, t *bigquery.Table) {
-	it, err := client.Read(context.Background(), t)
+func printTable(ctx context.Context, client *bigquery.Client, t *bigquery.Table) {
+	it, err := client.Read(ctx, t)
 	if err != nil {
 		log.Fatalf("Reading: %v", err)
 	}
 
 	id := t.FullyQualifiedName()
 	fmt.Printf("%s\n%s\n", id, strings.Repeat("-", len(id)))
-	printValues(it)
+	printValues(ctx, it)
 }
 
-func printQueryResults(client *bigquery.Client, queryJobID string) {
-	job, err := client.JobFromID(context.Background(), queryJobID)
+func printQueryResults(ctx context.Context, client *bigquery.Client, queryJobID string) {
+	job, err := client.JobFromID(ctx, queryJobID)
 	if err != nil {
 		log.Fatalf("Loading job: %v", err)
 	}
 
-	it, err := client.Read(context.Background(), job)
+	it, err := client.Read(ctx, job)
 	if err != nil {
 		log.Fatalf("Reading: %v", err)
 	}
 
 	// TODO: print schema.
-	printValues(it)
+	printValues(ctx, it)
 }
 
 func main() {
@@ -114,24 +113,20 @@
 		os.Exit(1)
 	}
 
+	ctx := context.Background()
 	tableRE, err := regexp.Compile(*table)
 	if err != nil {
 		fmt.Fprintf(os.Stderr, "--table is not a valid regular expression: %q\n", *table)
 		os.Exit(1)
 	}
 
-	httpClient, err := google.DefaultClient(context.Background(), bigquery.Scope)
-	if err != nil {
-		log.Fatalf("Creating http client: %v", err)
-	}
-
-	client, err := bigquery.NewClient(httpClient, *project)
+	client, err := bigquery.NewClient(ctx, *project)
 	if err != nil {
 		log.Fatalf("Creating bigquery client: %v", err)
 	}
 
 	if *jobID != "" {
-		printQueryResults(client, *jobID)
+		printQueryResults(ctx, client, *jobID)
 		return
 	}
 	ds := client.Dataset(*dataset)
@@ -142,7 +137,7 @@
 	}
 	for _, t := range tables {
 		if tableRE.MatchString(t.TableID) {
-			printTable(client, t)
+			printTable(ctx, client, t)
 		}
 	}
 }
diff --git a/go/src/google.golang.org/cloud/examples/pubsub/cmdline/main.go b/go/src/google.golang.org/cloud/examples/pubsub/cmdline/main.go
index 90b5666..8e9c543 100644
--- a/go/src/google.golang.org/cloud/examples/pubsub/cmdline/main.go
+++ b/go/src/google.golang.org/cloud/examples/pubsub/cmdline/main.go
@@ -82,25 +82,35 @@
 }
 
 func listTopics(client *pubsub.Client, argv []string) {
+	ctx := context.Background()
 	checkArgs(argv, 1)
-	topics, err := client.Topics(context.Background())
-	if err != nil {
-		log.Fatalf("Listing topics failed: %v", err)
-	}
-	for _, t := range topics {
-		fmt.Println(t.Name())
+	topics := client.Topics(ctx)
+	for {
+		switch topic, err := topics.Next(); err {
+		case nil:
+			fmt.Println(topic.Name())
+		case pubsub.Done:
+			return
+		default:
+			log.Fatalf("Listing topics failed: %v", err)
+		}
 	}
 }
 
 func listTopicSubscriptions(client *pubsub.Client, argv []string) {
+	ctx := context.Background()
 	checkArgs(argv, 2)
 	topic := argv[1]
-	subs, err := client.Topic(topic).Subscriptions(context.Background())
-	if err != nil {
-		log.Fatalf("Listing subscriptions failed: %v", err)
-	}
-	for _, s := range subs {
-		fmt.Println(s.Name())
+	subs := client.Topic(topic).Subscriptions(ctx)
+	for {
+		switch sub, err := subs.Next(); err {
+		case nil:
+			fmt.Println(sub.Name())
+		case pubsub.Done:
+			return
+		default:
+			log.Fatalf("Listing subscriptions failed: %v", err)
+		}
 	}
 }
 
@@ -173,13 +183,18 @@
 }
 
 func listSubscriptions(client *pubsub.Client, argv []string) {
+	ctx := context.Background()
 	checkArgs(argv, 1)
-	subs, err := client.Subscriptions(context.Background())
-	if err != nil {
-		log.Fatalf("Listing subscriptions failed: %v", err)
-	}
-	for _, s := range subs {
-		fmt.Println(s.Name())
+	subs := client.Subscriptions(ctx)
+	for {
+		switch sub, err := subs.Next(); err {
+		case nil:
+			fmt.Println(sub.Name())
+		case pubsub.Done:
+			return
+		default:
+			log.Fatalf("Listing subscriptions failed: %v", err)
+		}
 	}
 }
 
@@ -281,11 +296,11 @@
 }
 
 // publish publishes a series of messages to the named topic.
-func publishMessageBatches(client *pubsub.Client, topicName string, workerid int, rep *reporter) {
+func publishMessageBatches(client *pubsub.Client, topicName string, workerID int, rep *reporter) {
 	var r uint64
 	topic := client.Topic(topicName)
 	for !shouldQuit() {
-		msgPrefix := fmt.Sprintf("Worker: %d, Round: %d,", workerid, r)
+		msgPrefix := fmt.Sprintf("Worker: %d, Round: %d,", workerID, r)
 		if _, err := topic.Publish(context.Background(), genMessages(msgPrefix)...); err != nil {
 			log.Printf("Publish failed, %v\n", err)
 			return
diff --git a/go/src/google.golang.org/cloud/examples/pubsub/consumer/main.go b/go/src/google.golang.org/cloud/examples/pubsub/consumer/main.go
index 9582763..72a4659 100644
--- a/go/src/google.golang.org/cloud/examples/pubsub/consumer/main.go
+++ b/go/src/google.golang.org/cloud/examples/pubsub/consumer/main.go
@@ -19,6 +19,8 @@
 	"flag"
 	"fmt"
 	"log"
+	"os"
+	"os/signal"
 	"time"
 
 	"golang.org/x/net/context"
@@ -42,6 +44,9 @@
 		log.Fatal("-s is required")
 	}
 
+	quit := make(chan os.Signal, 1)
+	signal.Notify(quit, os.Interrupt)
+
 	ctx := context.Background()
 
 	client, err := pubsub.NewClient(ctx, *projID)
@@ -58,8 +63,16 @@
 	}
 	defer it.Stop()
 
+	go func() {
+		<-quit
+		it.Stop()
+	}()
+
 	for i := 0; i < *numConsume; i++ {
 		m, err := it.Next()
+		if err == pubsub.Done {
+			break
+		}
 		if err != nil {
 			fmt.Printf("advancing iterator: %v", err)
 			break
diff --git a/go/src/google.golang.org/cloud/examples/storage/appengine/app.go b/go/src/google.golang.org/cloud/examples/storage/appengine/app.go
index 6e598c3..6bbef1b 100644
--- a/go/src/google.golang.org/cloud/examples/storage/appengine/app.go
+++ b/go/src/google.golang.org/cloud/examples/storage/appengine/app.go
@@ -14,6 +14,10 @@
 
 //[START sample]
 // Package gcsdemo is an example App Engine app using the Google Cloud Storage API.
+//
+// NOTE: the google.golang.org/cloud/storage package is not compatible with
+// dev_appserver.py, so this example will not work in a local development
+// environment.
 package gcsdemo
 
 //[START imports]
@@ -46,7 +50,7 @@
 	bucket *storage.BucketHandle
 	client *storage.Client
 
-	w   http.ResponseWriter
+	w   io.Writer
 	ctx context.Context
 	// cleanUp is a list of filenames that need cleaning up at the end of the demo.
 	cleanUp []string
@@ -56,6 +60,7 @@
 
 func (d *demo) errorf(format string, args ...interface{}) {
 	d.failed = true
+	fmt.Fprintf(d.w, format+"\n", args...)
 	log.Errorf(d.ctx, format, args...)
 }
 
@@ -65,6 +70,12 @@
 		http.NotFound(w, r)
 		return
 	}
+	if appengine.IsDevAppServer() {
+		http.Error(w, "This example does not work with dev_appserver.py", http.StatusNotImplemented)
+		return
+	}
+
+	//[START get_default_bucket]
 	ctx := appengine.NewContext(r)
 	if bucket == "" {
 		var err error
@@ -73,10 +83,11 @@
 			return
 		}
 	}
+	//[END get_default_bucket]
 
 	client, err := storage.NewClient(ctx)
 	if err != nil {
-		log.Errorf(ctx, "failed to get default GCS bucket name: %v", err)
+		log.Errorf(ctx, "failed to create client: %v", err)
 		return
 	}
 	defer client.Close()
@@ -85,8 +96,9 @@
 	fmt.Fprintf(w, "Demo GCS Application running from Version: %v\n", appengine.VersionID(ctx))
 	fmt.Fprintf(w, "Using bucket name: %v\n\n", bucket)
 
+	buf := &bytes.Buffer{}
 	d := &demo{
-		w:      w,
+		w:      buf,
 		ctx:    ctx,
 		client: client,
 		bucket: client.Bucket(bucket),
@@ -112,9 +124,13 @@
 	d.deleteFiles()
 
 	if d.failed {
-		io.WriteString(w, "\nDemo failed.\n")
+		w.WriteHeader(http.StatusInternalServerError)
+		buf.WriteTo(w)
+		fmt.Fprintf(w, "\nDemo failed.\n")
 	} else {
-		io.WriteString(w, "\nDemo succeeded.\n")
+		w.WriteHeader(http.StatusOK)
+		buf.WriteTo(w)
+		fmt.Fprintf(w, "\nDemo succeeded.\n")
 	}
 }
 
@@ -174,6 +190,7 @@
 
 //[END read]
 
+//[START copy]
 // copyFile copies a file in Google Cloud Storage.
 func (d *demo) copyFile(fileName string) {
 	copyName := fileName + "-copy"
@@ -189,6 +206,8 @@
 	d.dumpStats(obj)
 }
 
+//[END copy]
+
 func (d *demo) dumpStats(obj *storage.ObjectAttrs) {
 	fmt.Fprintf(d.w, "(filename: /%v/%v, ", obj.Bucket, obj.Name)
 	fmt.Fprintf(d.w, "ContentType: %q, ", obj.ContentType)
@@ -207,6 +226,7 @@
 	fmt.Fprintf(d.w, "Updated: %v)\n", obj.Updated)
 }
 
+//[START file_metadata]
 // statFile reads the stats of the named file in Google Cloud Storage.
 func (d *demo) statFile(fileName string) {
 	io.WriteString(d.w, "\nFile stat:\n")
@@ -220,6 +240,8 @@
 	d.dumpStats(obj)
 }
 
+//[END file_metadata]
+
 // createListFiles creates files that will be used by listBucket.
 func (d *demo) createListFiles() {
 	io.WriteString(d.w, "\nCreating more files for listbucket...\n")
@@ -228,6 +250,7 @@
 	}
 }
 
+//[START list_bucket]
 // listBucket lists the contents of a bucket in Google Cloud Storage.
 func (d *demo) listBucket() {
 	io.WriteString(d.w, "\nListbucket result:\n")
@@ -247,6 +270,8 @@
 	}
 }
 
+//[END list_bucket]
+
 func (d *demo) listDir(name, indent string) {
 	query := &storage.Query{Prefix: name, Delimiter: "/"}
 	for query != nil {
diff --git a/go/src/google.golang.org/cloud/examples/storage/appenginevm/app.go b/go/src/google.golang.org/cloud/examples/storage/appenginevm/app.go
index b6aa0cb..ff819de 100644
--- a/go/src/google.golang.org/cloud/examples/storage/appenginevm/app.go
+++ b/go/src/google.golang.org/cloud/examples/storage/appenginevm/app.go
@@ -15,6 +15,7 @@
 // Package main is an example Managed VM app using the Google Cloud Storage API.
 package main
 
+//[START imports]
 import (
 	"bytes"
 	"fmt"
@@ -30,6 +31,8 @@
 	"google.golang.org/cloud/storage"
 )
 
+//[END imports]
+
 // bucket is a local cache of the app's default bucket name.
 var bucket string // or: var bucket = "<your-app-id>.appspot.com"
 
@@ -38,12 +41,13 @@
 	appengine.Main()
 }
 
+//[START bucket_struct]
 // demo struct holds information needed to run the various demo functions.
 type demo struct {
 	bucket *storage.BucketHandle
 	client *storage.Client
 
-	w   http.ResponseWriter
+	w   io.Writer
 	ctx context.Context
 	// cleanUp is a list of filenames that need cleaning up at the end of the demo.
 	cleanUp []string
@@ -51,8 +55,11 @@
 	failed bool
 }
 
+//[END bucket_struct]
+
 func (d *demo) errorf(format string, args ...interface{}) {
 	d.failed = true
+	fmt.Fprintf(d.w, format+"\n", args...)
 	log.Errorf(d.ctx, format, args...)
 }
 
@@ -62,6 +69,8 @@
 		http.NotFound(w, r)
 		return
 	}
+
+	//[START get_default_bucket]
 	ctx := appengine.NewContext(r)
 	if bucket == "" {
 		var err error
@@ -70,10 +79,11 @@
 			return
 		}
 	}
+	//[END get_default_bucket]
 
 	client, err := storage.NewClient(ctx)
 	if err != nil {
-		log.Errorf(ctx, "failed to get default GCS bucket name: %v", err)
+		log.Errorf(ctx, "failed to create client: %v", err)
 		return
 	}
 	defer client.Close()
@@ -82,8 +92,9 @@
 	fmt.Fprintf(w, "Demo GCS Application running from Version: %v\n", appengine.VersionID(ctx))
 	fmt.Fprintf(w, "Using bucket name: %v\n\n", bucket)
 
+	buf := &bytes.Buffer{}
 	d := &demo{
-		w:      w,
+		w:      buf,
 		ctx:    ctx,
 		client: client,
 		bucket: client.Bucket(bucket),
@@ -109,12 +120,17 @@
 	d.deleteFiles()
 
 	if d.failed {
-		io.WriteString(w, "\nDemo failed.\n")
+		w.WriteHeader(http.StatusInternalServerError)
+		buf.WriteTo(w)
+		fmt.Fprintf(w, "\nDemo failed.\n")
 	} else {
-		io.WriteString(w, "\nDemo succeeded.\n")
+		w.WriteHeader(http.StatusOK)
+		buf.WriteTo(w)
+		fmt.Fprintf(w, "\nDemo succeeded.\n")
 	}
 }
 
+//[START write]
 // createFile creates a file in Google Cloud Storage.
 func (d *demo) createFile(fileName string) {
 	fmt.Fprintf(d.w, "Creating file /%v/%v\n", bucket, fileName)
@@ -141,6 +157,9 @@
 	}
 }
 
+//[END write]
+
+//[START read]
 // readFile reads the named file in Google Cloud Storage.
 func (d *demo) readFile(fileName string) {
 	io.WriteString(d.w, "\nAbbreviated file content (first line and last 1K):\n")
@@ -165,6 +184,9 @@
 	}
 }
 
+//[END read]
+
+//[START copy]
 // copyFile copies a file in Google Cloud Storage.
 func (d *demo) copyFile(fileName string) {
 	copyName := fileName + "-copy"
@@ -180,6 +202,8 @@
 	d.dumpStats(obj)
 }
 
+//[END copy]
+
 func (d *demo) dumpStats(obj *storage.ObjectAttrs) {
 	fmt.Fprintf(d.w, "(filename: /%v/%v, ", obj.Bucket, obj.Name)
 	fmt.Fprintf(d.w, "ContentType: %q, ", obj.ContentType)
@@ -198,6 +222,7 @@
 	fmt.Fprintf(d.w, "Updated: %v)\n", obj.Updated)
 }
 
+//[START file_metadata]
 // statFile reads the stats of the named file in Google Cloud Storage.
 func (d *demo) statFile(fileName string) {
 	io.WriteString(d.w, "\nFile stat:\n")
@@ -211,6 +236,8 @@
 	d.dumpStats(obj)
 }
 
+//[END file_metadata]
+
 // createListFiles creates files that will be used by listBucket.
 func (d *demo) createListFiles() {
 	io.WriteString(d.w, "\nCreating more files for listbucket...\n")
@@ -219,6 +246,7 @@
 	}
 }
 
+//[START list_bucket]
 // listBucket lists the contents of a bucket in Google Cloud Storage.
 func (d *demo) listBucket() {
 	io.WriteString(d.w, "\nListbucket result:\n")
@@ -238,6 +266,8 @@
 	}
 }
 
+//[END list_bucket]
+
 func (d *demo) listDir(name, indent string) {
 	query := &storage.Query{Prefix: name, Delimiter: "/"}
 	for query != nil {
@@ -385,6 +415,7 @@
 	d.dumpACL(fileName)
 }
 
+//[START delete]
 // deleteFiles deletes all the temporary files from a bucket created by this demo.
 func (d *demo) deleteFiles() {
 	io.WriteString(d.w, "\nDeleting files...\n")
@@ -396,3 +427,5 @@
 		}
 	}
 }
+
+//[END delete]
diff --git a/go/src/google.golang.org/cloud/internal/transport/cancelreq_legacy.go b/go/src/google.golang.org/cloud/internal/transport/cancelreq_legacy.go
deleted file mode 100644
index c11a4dd..0000000
--- a/go/src/google.golang.org/cloud/internal/transport/cancelreq_legacy.go
+++ /dev/null
@@ -1,31 +0,0 @@
-// Copyright 2015 Google Inc. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//      http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// +build !go1.5
-
-package transport
-
-import "net/http"
-
-// makeReqCancel returns a closure that cancels the given http.Request
-// when called.
-func makeReqCancel(req *http.Request) func(http.RoundTripper) {
-	// Go 1.4 and prior do not have a reliable way of cancelling a request.
-	// Transport.CancelRequest will only work if the request is already in-flight.
-	return func(r http.RoundTripper) {
-		if t, ok := r.(*http.Transport); ok {
-			t.CancelRequest(req)
-		}
-	}
-}
diff --git a/go/src/google.golang.org/cloud/internal/transport/dial.go b/go/src/google.golang.org/cloud/internal/transport/dial.go
index ae2baf9..a0f8bd9 100644
--- a/go/src/google.golang.org/cloud/internal/transport/dial.go
+++ b/go/src/google.golang.org/cloud/internal/transport/dial.go
@@ -69,40 +69,6 @@
 	return oauth2.NewClient(ctx, o.TokenSource), o.Endpoint, nil
 }
 
-// NewProtoClient returns a ProtoClient for communicating with a Google cloud service,
-// configured with the given ClientOptions.
-func NewProtoClient(ctx context.Context, opt ...cloud.ClientOption) (*ProtoClient, error) {
-	var o opts.DialOpt
-	for _, opt := range opt {
-		opt.Resolve(&o)
-	}
-	if o.GRPCClient != nil {
-		return nil, errors.New("unsupported GRPC base transport specified")
-	}
-	var client *http.Client
-	switch {
-	case o.HTTPClient != nil:
-		if o.TokenSource != nil {
-			return nil, errors.New("at most one of WithTokenSource or WithBaseHTTP may be provided")
-		}
-		client = o.HTTPClient
-	case o.TokenSource != nil:
-		client = oauth2.NewClient(ctx, o.TokenSource)
-	default:
-		var err error
-		client, err = google.DefaultClient(ctx, o.Scopes...)
-		if err != nil {
-			return nil, err
-		}
-	}
-
-	return &ProtoClient{
-		client:    client,
-		endpoint:  o.Endpoint,
-		userAgent: o.UserAgent,
-	}, nil
-}
-
 // DialGRPC returns a GRPC connection for use communicating with a Google cloud
 // service, configured with the given ClientOptions.
 func DialGRPC(ctx context.Context, opt ...cloud.ClientOption) (*grpc.ClientConn, error) {
diff --git a/go/src/google.golang.org/cloud/internal/transport/proto.go b/go/src/google.golang.org/cloud/internal/transport/proto.go
deleted file mode 100644
index 05b11cd..0000000
--- a/go/src/google.golang.org/cloud/internal/transport/proto.go
+++ /dev/null
@@ -1,80 +0,0 @@
-// Copyright 2015 Google Inc. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//      http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package transport
-
-import (
-	"bytes"
-	"io/ioutil"
-	"net/http"
-
-	"github.com/golang/protobuf/proto"
-	"golang.org/x/net/context"
-)
-
-type ProtoClient struct {
-	client    *http.Client
-	endpoint  string
-	userAgent string
-}
-
-func (c *ProtoClient) Call(ctx context.Context, method string, req, resp proto.Message) error {
-	payload, err := proto.Marshal(req)
-	if err != nil {
-		return err
-	}
-
-	httpReq, err := http.NewRequest("POST", c.endpoint+method, bytes.NewReader(payload))
-	if err != nil {
-		return err
-	}
-	httpReq.Header.Set("Content-Type", "application/x-protobuf")
-	if ua := c.userAgent; ua != "" {
-		httpReq.Header.Set("User-Agent", ua)
-	}
-
-	errc := make(chan error, 1)
-	cancel := makeReqCancel(httpReq)
-
-	go func() {
-		r, err := c.client.Do(httpReq)
-		if err != nil {
-			errc <- err
-			return
-		}
-		defer r.Body.Close()
-
-		body, err := ioutil.ReadAll(r.Body)
-		if r.StatusCode != http.StatusOK {
-			err = &ErrHTTP{
-				StatusCode: r.StatusCode,
-				Body:       body,
-				err:        err,
-			}
-		}
-		if err != nil {
-			errc <- err
-			return
-		}
-		errc <- proto.Unmarshal(body, resp)
-	}()
-
-	select {
-	case <-ctx.Done():
-		cancel(c.client.Transport) // Cancel the HTTP request.
-		return ctx.Err()
-	case err := <-errc:
-		return err
-	}
-}
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/README.md b/go/src/google.golang.org/cloud/logging/apiv2/README.md
new file mode 100644
index 0000000..b2af192
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/README.md
@@ -0,0 +1,9 @@
+Auto-generated logging v2 clients
+=================================
+
+This package includes auto-generated clients for the logging v2 API.
+
+Use the handwritten logging client (in the parent directory,
+google.golang.org/cloud/logging) in preference to this.
+
+This code is EXPERIMENTAL and subject to CHANGE AT ANY TIME.
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/config/client.go b/go/src/google.golang.org/cloud/logging/apiv2/config/client.go
new file mode 100644
index 0000000..72196b0
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/config/client.go
@@ -0,0 +1,304 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// AUTO-GENERATED DOCUMENTATION AND SERVICE
+
+package config
+
+import (
+	"errors"
+	"fmt"
+	"runtime"
+	"time"
+
+	gax "github.com/googleapis/gax-go"
+	google_logging_v2 "github.com/googleapis/proto-client-go/logging/v2"
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/metadata"
+)
+
+const (
+	gapicNameVersion = "gapic/0.1.0"
+)
+
+var (
+	// Done is returned by iterators on successful completion.
+	Done = errors.New("iterator done")
+
+	projectPathTemplate = gax.MustCompilePathTemplate("projects/{project}")
+	sinkPathTemplate    = gax.MustCompilePathTemplate("projects/{project}/sinks/{sink}")
+)
+
+func defaultClientSettings() gax.ClientSettings {
+	return gax.ClientSettings{
+		AppName:    "gax",
+		AppVersion: gax.Version,
+		Endpoint:   "logging.googleapis.com:443",
+		Scopes: []string{
+			"https://www.googleapis.com/auth/logging.write",
+			"https://www.googleapis.com/auth/logging.admin",
+			"https://www.googleapis.com/auth/logging.read",
+			"https://www.googleapis.com/auth/cloud-platform.read-only",
+			"https://www.googleapis.com/auth/cloud-platform",
+		},
+		CallOptions: map[string][]gax.CallOption{
+			"ListSinks":  append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"GetSink":    append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"CreateSink": append([]gax.CallOption{withNonIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"UpdateSink": append([]gax.CallOption{withNonIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"DeleteSink": append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+		},
+	}
+}
+
+func withIdempotentRetryCodes() gax.CallOption {
+	return gax.WithRetryCodes([]codes.Code{
+		codes.DeadlineExceeded,
+		codes.Unavailable,
+	})
+}
+
+func withNonIdempotentRetryCodes() gax.CallOption {
+	return gax.WithRetryCodes([]codes.Code{})
+}
+
+func defaultRetryOptions() []gax.CallOption {
+	return []gax.CallOption{
+		gax.WithTimeout(45000 * time.Millisecond),
+		gax.WithDelayTimeoutSettings(100*time.Millisecond, 1000*time.Millisecond, 1.2),
+		gax.WithRPCTimeoutSettings(2000*time.Millisecond, 30000*time.Millisecond, 1.5),
+	}
+}
+
+// Client is a client for interacting with ConfigServiceV2.
+type Client struct {
+	// The connection to the service.
+	conn *grpc.ClientConn
+
+	// The gRPC API client.
+	client google_logging_v2.ConfigServiceV2Client
+
+	// The map from the method name to the default call options for the method of this service.
+	callOptions map[string][]gax.CallOption
+
+	// The metadata to be sent with each request.
+	metadata map[string][]string
+}
+
+// NewClient creates a new API service client.
+func NewClient(ctx context.Context, opts ...gax.ClientOption) (*Client, error) {
+	s := defaultClientSettings()
+	for _, opt := range opts {
+		opt.Resolve(&s)
+	}
+	conn, err := gax.DialGRPC(ctx, s)
+	if err != nil {
+		return nil, err
+	}
+	return &Client{
+		conn:        conn,
+		client:      google_logging_v2.NewConfigServiceV2Client(conn),
+		callOptions: s.CallOptions,
+		metadata: map[string][]string{
+			"x-goog-api-client": []string{fmt.Sprintf("%s/%s %s gax/%s go/%s", s.AppName, s.AppVersion, gapicNameVersion, gax.Version, runtime.Version())},
+		},
+	}, nil
+}
+
+// Close closes the connection to the API service. The user should invoke this when
+// the client is no longer required.
+func (c *Client) Close() error {
+	return c.conn.Close()
+}
+
+// Path templates.
+
+// ProjectPath returns the path for the project resource.
+func ProjectPath(project string) string {
+	path, err := projectPathTemplate.Render(map[string]string{
+		"project": project,
+	})
+	if err != nil {
+		panic(err)
+	}
+	return path
+}
+
+// SinkPath returns the path for the sink resource.
+func SinkPath(project string, sink string) string {
+	path, err := sinkPathTemplate.Render(map[string]string{
+		"project": project,
+		"sink":    sink,
+	})
+	if err != nil {
+		panic(err)
+	}
+	return path
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// ListSinks lists sinks.
+func (c *Client) ListSinks(ctx context.Context, req *google_logging_v2.ListSinksRequest) *LogSinkIterator {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	it := &LogSinkIterator{}
+	it.apiCall = func() error {
+		if it.atLastPage {
+			return Done
+		}
+		var resp *google_logging_v2.ListSinksResponse
+		err := gax.Invoke(ctx, func(ctx context.Context) error {
+			var err error
+			req.PageToken = it.nextPageToken
+			req.PageSize = it.pageSize
+			resp, err = c.client.ListSinks(ctx, req)
+			return err
+		}, c.callOptions["ListSinks"]...)
+		if err != nil {
+			return err
+		}
+		if resp.NextPageToken == "" {
+			it.atLastPage = true
+		} else {
+			it.nextPageToken = resp.NextPageToken
+		}
+		it.items = resp.Sinks
+		return nil
+	}
+	return it
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// GetSink gets a sink.
+func (c *Client) GetSink(ctx context.Context, req *google_logging_v2.GetSinkRequest) (*google_logging_v2.LogSink, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.LogSink
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.GetSink(ctx, req)
+		return err
+	}, c.callOptions["GetSink"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// CreateSink creates a sink.
+func (c *Client) CreateSink(ctx context.Context, req *google_logging_v2.CreateSinkRequest) (*google_logging_v2.LogSink, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.LogSink
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.CreateSink(ctx, req)
+		return err
+	}, c.callOptions["CreateSink"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// UpdateSink creates or updates a sink.
+func (c *Client) UpdateSink(ctx context.Context, req *google_logging_v2.UpdateSinkRequest) (*google_logging_v2.LogSink, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.LogSink
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.UpdateSink(ctx, req)
+		return err
+	}, c.callOptions["UpdateSink"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// DeleteSink deletes a sink.
+func (c *Client) DeleteSink(ctx context.Context, req *google_logging_v2.DeleteSinkRequest) error {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		_, err = c.client.DeleteSink(ctx, req)
+		return err
+	}, c.callOptions["DeleteSink"]...)
+	return err
+}
+
+// Iterators.
+//
+
+// LogSinkIterator manages a stream of *google_logging_v2.LogSink.
+type LogSinkIterator struct {
+	// The current page data.
+	items         []*google_logging_v2.LogSink
+	atLastPage    bool
+	currentIndex  int
+	pageSize      int32
+	nextPageToken string
+	apiCall       func() error
+}
+
+// NextPage retrieves the next page of results and replaces the
+// iterator's cached items. It returns Done if no more pages exist.
+func (it *LogSinkIterator) NextPage() ([]*google_logging_v2.LogSink, error) {
+	err := it.apiCall()
+	if err != nil {
+		return nil, err
+	}
+	return it.items, nil
+}
+
+// Next returns the next element in the stream. It returns Done at
+// the end of the stream.
+func (it *LogSinkIterator) Next() (*google_logging_v2.LogSink, error) {
+	for it.currentIndex >= len(it.items) {
+		_, err := it.NextPage()
+		if err != nil {
+			return nil, err
+		}
+		it.currentIndex = 0
+	}
+	result := it.items[it.currentIndex]
+	it.currentIndex++
+	return result, nil
+}
+
+// SetPageSize sets the maximum size of the next page to be
+// retrieved.
+func (it *LogSinkIterator) SetPageSize(pageSize int32) {
+	it.pageSize = pageSize
+}
+
+// SetPageToken sets the token for the next page to be retrieved. It
+// does not fetch that page or modify the cached page; results from
+// Next are not guaranteed to come from the new page until NextPage
+// is called.
+func (it *LogSinkIterator) SetPageToken(token string) {
+	it.nextPageToken = token
+}
+
+// NextPageToken returns the next page token.
+func (it *LogSinkIterator) NextPageToken() string {
+	return it.nextPageToken
+}
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/config/client_test.go b/go/src/google.golang.org/cloud/logging/apiv2/config/client_test.go
new file mode 100644
index 0000000..eaf8539
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/config/client_test.go
@@ -0,0 +1,91 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// AUTO-GENERATED DOCUMENTATION AND SERVICE
+
+package config_test
+
+import (
+	gax "github.com/googleapis/gax-go"
+	google_logging_v2 "github.com/googleapis/proto-client-go/logging/v2"
+	"golang.org/x/net/context"
+	"google.golang.org/cloud/logging/apiv2/config"
+)
+
+func ExampleNewClient() {
+	ctx := context.Background()
+	opts := []gax.ClientOption{ /* Optional client parameters. */ }
+	c, err := config.NewClient(ctx, opts...)
+	_, _ = c, err // Handle error.
+}
+
+func ExampleClient_ListSinks() {
+	ctx := context.Background()
+	c, err := config.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.ListSinksRequest{ /* Data... */ }
+	it := c.ListSinks(ctx, req)
+	var resp *google_logging_v2.LogSink
+	for {
+		resp, err = it.Next()
+		if err != nil {
+			break
+		}
+	}
+	_ = resp
+}
+
+func ExampleClient_GetSink() {
+	ctx := context.Background()
+	c, err := config.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.GetSinkRequest{ /* Data... */ }
+	var resp *google_logging_v2.LogSink
+	resp, err = c.GetSink(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_CreateSink() {
+	ctx := context.Background()
+	c, err := config.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.CreateSinkRequest{ /* Data... */ }
+	var resp *google_logging_v2.LogSink
+	resp, err = c.CreateSink(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_UpdateSink() {
+	ctx := context.Background()
+	c, err := config.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.UpdateSinkRequest{ /* Data... */ }
+	var resp *google_logging_v2.LogSink
+	resp, err = c.UpdateSink(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_DeleteSink() {
+	ctx := context.Background()
+	c, err := config.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.DeleteSinkRequest{ /* Data... */ }
+	err = c.DeleteSink(ctx, req)
+	_ = err // Handle error.
+}
diff --git a/go/src/google.golang.org/cloud/internal/transport/cancelreq.go b/go/src/google.golang.org/cloud/logging/apiv2/config/doc.go
similarity index 61%
copy from go/src/google.golang.org/cloud/internal/transport/cancelreq.go
copy to go/src/google.golang.org/cloud/logging/apiv2/config/doc.go
index ddae71c..2a47e1e 100644
--- a/go/src/google.golang.org/cloud/internal/transport/cancelreq.go
+++ b/go/src/google.golang.org/cloud/logging/apiv2/config/doc.go
@@ -1,4 +1,4 @@
-// Copyright 2015 Google Inc. All Rights Reserved.
+// Copyright 2016 Google Inc. All Rights Reserved.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
@@ -12,18 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-// +build go1.5
-
-package transport
-
-import "net/http"
-
-// makeReqCancel returns a closure that cancels the given http.Request
-// when called.
-func makeReqCancel(req *http.Request) func(http.RoundTripper) {
-	c := make(chan struct{})
-	req.Cancel = c
-	return func(http.RoundTripper) {
-		close(c)
-	}
-}
+// Package config is an experimental, auto-generated package for the logging
+// API.
+//
+// The Google Cloud Logging API lets you write log entries and manage your logs, log sinks and logs-based metrics.
+package config
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/logging/client.go b/go/src/google.golang.org/cloud/logging/apiv2/logging/client.go
new file mode 100644
index 0000000..f86818c
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/logging/client.go
@@ -0,0 +1,370 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// AUTO-GENERATED DOCUMENTATION AND SERVICE
+
+// Service for ingesting and querying logs.
+package logging
+
+import (
+	"errors"
+	"fmt"
+	"runtime"
+	"time"
+
+	gax "github.com/googleapis/gax-go"
+	google_api "github.com/googleapis/proto-client-go/api"
+	google_logging_v2 "github.com/googleapis/proto-client-go/logging/v2"
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/metadata"
+)
+
+const (
+	gapicNameVersion = "gapic/0.1.0"
+)
+
+var (
+	// Done is returned by iterators on successful completion.
+	Done = errors.New("iterator done")
+
+	projectPathTemplate = gax.MustCompilePathTemplate("projects/{project}")
+	logPathTemplate     = gax.MustCompilePathTemplate("projects/{project}/logs/{log}")
+)
+
+func defaultClientSettings() gax.ClientSettings {
+	return gax.ClientSettings{
+		AppName:    "gax",
+		AppVersion: gax.Version,
+		Endpoint:   "logging.googleapis.com:443",
+		Scopes: []string{
+			"https://www.googleapis.com/auth/logging.write",
+			"https://www.googleapis.com/auth/logging.admin",
+			"https://www.googleapis.com/auth/logging.read",
+			"https://www.googleapis.com/auth/cloud-platform.read-only",
+			"https://www.googleapis.com/auth/cloud-platform",
+		},
+		CallOptions: map[string][]gax.CallOption{
+			"DeleteLog":                        append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"WriteLogEntries":                  append([]gax.CallOption{withNonIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"ListLogEntries":                   append([]gax.CallOption{withIdempotentRetryCodes()}, listRetryOptions()...),
+			"ListMonitoredResourceDescriptors": append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+		},
+	}
+}
+
+func withIdempotentRetryCodes() gax.CallOption {
+	return gax.WithRetryCodes([]codes.Code{
+		codes.DeadlineExceeded,
+		codes.Unavailable,
+	})
+}
+
+func withNonIdempotentRetryCodes() gax.CallOption {
+	return gax.WithRetryCodes([]codes.Code{})
+}
+
+func defaultRetryOptions() []gax.CallOption {
+	return []gax.CallOption{
+		gax.WithTimeout(45000 * time.Millisecond),
+		gax.WithDelayTimeoutSettings(100*time.Millisecond, 1000*time.Millisecond, 1.2),
+		gax.WithRPCTimeoutSettings(2000*time.Millisecond, 30000*time.Millisecond, 1.5),
+	}
+}
+
+func listRetryOptions() []gax.CallOption {
+	return []gax.CallOption{
+		gax.WithTimeout(45000 * time.Millisecond),
+		gax.WithDelayTimeoutSettings(100*time.Millisecond, 1000*time.Millisecond, 1.2),
+		gax.WithRPCTimeoutSettings(7000*time.Millisecond, 30000*time.Millisecond, 1.5),
+	}
+}
+
+// Client is a client for interacting with LoggingServiceV2.
+type Client struct {
+	// The connection to the service.
+	conn *grpc.ClientConn
+
+	// The gRPC API client.
+	client google_logging_v2.LoggingServiceV2Client
+
+	// The map from the method name to the default call options for the method of this service.
+	callOptions map[string][]gax.CallOption
+
+	// The metadata to be sent with each request.
+	metadata map[string][]string
+}
+
+// NewClient creates a new API service client.
+func NewClient(ctx context.Context, opts ...gax.ClientOption) (*Client, error) {
+	s := defaultClientSettings()
+	for _, opt := range opts {
+		opt.Resolve(&s)
+	}
+	conn, err := gax.DialGRPC(ctx, s)
+	if err != nil {
+		return nil, err
+	}
+	return &Client{
+		conn:        conn,
+		client:      google_logging_v2.NewLoggingServiceV2Client(conn),
+		callOptions: s.CallOptions,
+		metadata: map[string][]string{
+			"x-goog-api-client": []string{fmt.Sprintf("%s/%s %s gax/%s go/%s", s.AppName, s.AppVersion, gapicNameVersion, gax.Version, runtime.Version())},
+		},
+	}, nil
+}
+
+// Close closes the connection to the API service. The user should invoke this when
+// the client is no longer required.
+func (c *Client) Close() error {
+	return c.conn.Close()
+}
+
+// Path templates.
+
+// ProjectPath returns the path for the project resource.
+func ProjectPath(project string) string {
+	path, err := projectPathTemplate.Render(map[string]string{
+		"project": project,
+	})
+	if err != nil {
+		panic(err)
+	}
+	return path
+}
+
+// LogPath returns the path for the log resource.
+func LogPath(project string, log string) string {
+	path, err := logPathTemplate.Render(map[string]string{
+		"project": project,
+		"log":     log,
+	})
+	if err != nil {
+		panic(err)
+	}
+	return path
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// DeleteLog deletes a log and all its log entries.
+// The log will reappear if it receives new entries.
+func (c *Client) DeleteLog(ctx context.Context, req *google_logging_v2.DeleteLogRequest) error {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		_, err = c.client.DeleteLog(ctx, req)
+		return err
+	}, c.callOptions["DeleteLog"]...)
+	return err
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// WriteLogEntries writes log entries to Cloud Logging.
+// All log entries in Cloud Logging are written by this method.
+func (c *Client) WriteLogEntries(ctx context.Context, req *google_logging_v2.WriteLogEntriesRequest) (*google_logging_v2.WriteLogEntriesResponse, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.WriteLogEntriesResponse
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.WriteLogEntries(ctx, req)
+		return err
+	}, c.callOptions["WriteLogEntries"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// ListLogEntries lists log entries.  Use this method to retrieve log entries from Cloud
+// Logging.  For ways to export log entries, see
+// [Exporting Logs](/logging/docs/export).
+func (c *Client) ListLogEntries(ctx context.Context, req *google_logging_v2.ListLogEntriesRequest) *LogEntryIterator {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	it := &LogEntryIterator{}
+	it.apiCall = func() error {
+		if it.atLastPage {
+			return Done
+		}
+		var resp *google_logging_v2.ListLogEntriesResponse
+		err := gax.Invoke(ctx, func(ctx context.Context) error {
+			var err error
+			req.PageToken = it.nextPageToken
+			req.PageSize = it.pageSize
+			resp, err = c.client.ListLogEntries(ctx, req)
+			return err
+		}, c.callOptions["ListLogEntries"]...)
+		if err != nil {
+			return err
+		}
+		if resp.NextPageToken == "" {
+			it.atLastPage = true
+		} else {
+			it.nextPageToken = resp.NextPageToken
+		}
+		it.items = resp.Entries
+		return nil
+	}
+	return it
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// ListMonitoredResourceDescriptors lists monitored resource descriptors that are used by Cloud Logging.
+func (c *Client) ListMonitoredResourceDescriptors(ctx context.Context, req *google_logging_v2.ListMonitoredResourceDescriptorsRequest) *MonitoredResourceDescriptorIterator {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	it := &MonitoredResourceDescriptorIterator{}
+	it.apiCall = func() error {
+		if it.atLastPage {
+			return Done
+		}
+		var resp *google_logging_v2.ListMonitoredResourceDescriptorsResponse
+		err := gax.Invoke(ctx, func(ctx context.Context) error {
+			var err error
+			req.PageToken = it.nextPageToken
+			req.PageSize = it.pageSize
+			resp, err = c.client.ListMonitoredResourceDescriptors(ctx, req)
+			return err
+		}, c.callOptions["ListMonitoredResourceDescriptors"]...)
+		if err != nil {
+			return err
+		}
+		if resp.NextPageToken == "" {
+			it.atLastPage = true
+		} else {
+			it.nextPageToken = resp.NextPageToken
+		}
+		it.items = resp.ResourceDescriptors
+		return nil
+	}
+	return it
+}
+
+// Iterators.
+//
+
+// LogEntryIterator manages a stream of *google_logging_v2.LogEntry.
+type LogEntryIterator struct {
+	// The current page data.
+	items         []*google_logging_v2.LogEntry
+	atLastPage    bool
+	currentIndex  int
+	pageSize      int32
+	nextPageToken string
+	apiCall       func() error
+}
+
+// NextPage moves to the next page and updates its internal data.
+// It returns Done if no more pages exist.
+func (it *LogEntryIterator) NextPage() ([]*google_logging_v2.LogEntry, error) {
+	err := it.apiCall()
+	if err != nil {
+		return nil, err
+	}
+	return it.items, err
+}
+
+// Next returns the next element in the stream. It returns Done at
+// the end of the stream.
+func (it *LogEntryIterator) Next() (*google_logging_v2.LogEntry, error) {
+	for it.currentIndex >= len(it.items) {
+		_, err := it.NextPage()
+		if err != nil {
+			return nil, err
+		}
+		it.currentIndex = 0
+	}
+	result := it.items[it.currentIndex]
+	it.currentIndex++
+	return result, nil
+}
+
+// SetPageSize sets the maximum size of the next page to be
+// retrieved.
+func (it *LogEntryIterator) SetPageSize(pageSize int32) {
+	it.pageSize = pageSize
+}
+
+// SetPageToken sets the next page token to be retrieved. Note, it
+// does not retrieve the next page, or modify the cached page. If
+// Next is called, there is no guarantee that the result returned
+// will be from the next page until NextPage is called.
+func (it *LogEntryIterator) SetPageToken(token string) {
+	it.nextPageToken = token
+}
+
+// NextPageToken returns the next page token.
+func (it *LogEntryIterator) NextPageToken() string {
+	return it.nextPageToken
+}
+
+// MonitoredResourceDescriptorIterator manages a stream of *google_api.MonitoredResourceDescriptor.
+type MonitoredResourceDescriptorIterator struct {
+	// The current page data.
+	items         []*google_api.MonitoredResourceDescriptor
+	atLastPage    bool
+	currentIndex  int
+	pageSize      int32
+	nextPageToken string
+	apiCall       func() error
+}
+
+// NextPage moves to the next page and updates its internal data.
+// It returns Done if no more pages exist.
+func (it *MonitoredResourceDescriptorIterator) NextPage() ([]*google_api.MonitoredResourceDescriptor, error) {
+	err := it.apiCall()
+	if err != nil {
+		return nil, err
+	}
+	return it.items, err
+}
+
+// Next returns the next element in the stream. It returns Done at
+// the end of the stream.
+func (it *MonitoredResourceDescriptorIterator) Next() (*google_api.MonitoredResourceDescriptor, error) {
+	for it.currentIndex >= len(it.items) {
+		_, err := it.NextPage()
+		if err != nil {
+			return nil, err
+		}
+		it.currentIndex = 0
+	}
+	result := it.items[it.currentIndex]
+	it.currentIndex++
+	return result, nil
+}
+
+// SetPageSize sets the maximum size of the next page to be
+// retrieved.
+func (it *MonitoredResourceDescriptorIterator) SetPageSize(pageSize int32) {
+	it.pageSize = pageSize
+}
+
+// SetPageToken sets the next page token to be retrieved. Note, it
+// does not retrieve the next page, or modify the cached page. If
+// Next is called, there is no guarantee that the result returned
+// will be from the next page until NextPage is called.
+func (it *MonitoredResourceDescriptorIterator) SetPageToken(token string) {
+	it.nextPageToken = token
+}
+
+// NextPageToken returns the next page token.
+func (it *MonitoredResourceDescriptorIterator) NextPageToken() string {
+	return it.nextPageToken
+}
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/logging/client_test.go b/go/src/google.golang.org/cloud/logging/apiv2/logging/client_test.go
new file mode 100644
index 0000000..268ccea
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/logging/client_test.go
@@ -0,0 +1,87 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// AUTO-GENERATED DOCUMENTATION AND SERVICE
+
+package logging_test
+
+import (
+	gax "github.com/googleapis/gax-go"
+	google_api "github.com/googleapis/proto-client-go/api"
+	google_logging_v2 "github.com/googleapis/proto-client-go/logging/v2"
+	"golang.org/x/net/context"
+	"google.golang.org/cloud/logging/apiv2/logging"
+)
+
+func ExampleNewClient() {
+	ctx := context.Background()
+	opts := []gax.ClientOption{ /* Optional client parameters. */ }
+	c, err := logging.NewClient(ctx, opts...)
+	_, _ = c, err // Handle error.
+}
+
+func ExampleClient_DeleteLog() {
+	ctx := context.Background()
+	c, err := logging.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.DeleteLogRequest{ /* Data... */ }
+	err = c.DeleteLog(ctx, req)
+	_ = err // Handle error.
+}
+
+func ExampleClient_WriteLogEntries() {
+	ctx := context.Background()
+	c, err := logging.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.WriteLogEntriesRequest{ /* Data... */ }
+	var resp *google_logging_v2.WriteLogEntriesResponse
+	resp, err = c.WriteLogEntries(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_ListLogEntries() {
+	ctx := context.Background()
+	c, err := logging.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.ListLogEntriesRequest{ /* Data... */ }
+	it := c.ListLogEntries(ctx, req)
+	var resp *google_logging_v2.LogEntry
+	for {
+		resp, err = it.Next()
+		if err != nil {
+			break
+		}
+	}
+	_ = resp
+}
+
+func ExampleClient_ListMonitoredResourceDescriptors() {
+	ctx := context.Background()
+	c, err := logging.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.ListMonitoredResourceDescriptorsRequest{ /* Data... */ }
+	it := c.ListMonitoredResourceDescriptors(ctx, req)
+	var resp *google_api.MonitoredResourceDescriptor
+	for {
+		resp, err = it.Next()
+		if err != nil {
+			break
+		}
+	}
+	_ = resp
+}
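Note that the `for { it.Next() }` loops in the examples above break on any error, which conflates the `Done` sentinel with real failures. A toy stand-in for the generated iterators — `intIterator` is a hypothetical type, not part of the package — showing the intended consumption pattern that distinguishes the two:

```go
package main

import (
	"errors"
	"fmt"
)

// Done mirrors the sentinel error returned by the generated iterators
// on successful completion.
var Done = errors.New("iterator done")

// intIterator is a toy stand-in for LogEntryIterator: it serves items
// page by page and returns Done once the stream is exhausted.
type intIterator struct {
	pages [][]int // remaining pages
	items []int   // current page
	idx   int     // position within the current page
}

func (it *intIterator) Next() (int, error) {
	for it.idx >= len(it.items) {
		if len(it.pages) == 0 {
			return 0, Done
		}
		it.items, it.pages = it.pages[0], it.pages[1:]
		it.idx = 0
	}
	v := it.items[it.idx]
	it.idx++
	return v, nil
}

func main() {
	it := &intIterator{pages: [][]int{{1, 2}, {3}}}
	for {
		v, err := it.Next()
		if err == Done {
			break // normal end of stream
		}
		if err != nil {
			panic(err) // a real RPC error, distinct from Done
		}
		fmt.Println(v)
	}
}
```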
diff --git a/go/src/google.golang.org/cloud/internal/transport/cancelreq.go b/go/src/google.golang.org/cloud/logging/apiv2/logging/doc.go
similarity index 61%
rename from go/src/google.golang.org/cloud/internal/transport/cancelreq.go
rename to go/src/google.golang.org/cloud/logging/apiv2/logging/doc.go
index ddae71c..edf9766 100644
--- a/go/src/google.golang.org/cloud/internal/transport/cancelreq.go
+++ b/go/src/google.golang.org/cloud/logging/apiv2/logging/doc.go
@@ -1,4 +1,4 @@
-// Copyright 2015 Google Inc. All Rights Reserved.
+// Copyright 2016 Google Inc. All Rights Reserved.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
@@ -12,18 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-// +build go1.5
-
-package transport
-
-import "net/http"
-
-// makeReqCancel returns a closure that cancels the given http.Request
-// when called.
-func makeReqCancel(req *http.Request) func(http.RoundTripper) {
-	c := make(chan struct{})
-	req.Cancel = c
-	return func(http.RoundTripper) {
-		close(c)
-	}
-}
+// Package logging is an experimental, auto-generated package for the logging
+// API.
+//
+// The Google Cloud Logging API lets you write log entries and manage your logs, log sinks and logs-based metrics.
+package logging
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/metrics/client.go b/go/src/google.golang.org/cloud/logging/apiv2/metrics/client.go
new file mode 100644
index 0000000..e90fcd9
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/metrics/client.go
@@ -0,0 +1,304 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// AUTO-GENERATED DOCUMENTATION AND SERVICE
+
+package metrics
+
+import (
+	"errors"
+	"fmt"
+	"runtime"
+	"time"
+
+	gax "github.com/googleapis/gax-go"
+	google_logging_v2 "github.com/googleapis/proto-client-go/logging/v2"
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/metadata"
+)
+
+const (
+	gapicNameVersion = "gapic/0.1.0"
+)
+
+var (
+	// Done is returned by iterators on successful completion.
+	Done = errors.New("iterator done")
+
+	projectPathTemplate = gax.MustCompilePathTemplate("projects/{project}")
+	metricPathTemplate  = gax.MustCompilePathTemplate("projects/{project}/metrics/{metric}")
+)
+
+func defaultClientSettings() gax.ClientSettings {
+	return gax.ClientSettings{
+		AppName:    "gax",
+		AppVersion: gax.Version,
+		Endpoint:   "logging.googleapis.com:443",
+		Scopes: []string{
+			"https://www.googleapis.com/auth/logging.write",
+			"https://www.googleapis.com/auth/logging.admin",
+			"https://www.googleapis.com/auth/logging.read",
+			"https://www.googleapis.com/auth/cloud-platform.read-only",
+			"https://www.googleapis.com/auth/cloud-platform",
+		},
+		CallOptions: map[string][]gax.CallOption{
+			"ListLogMetrics":  append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"GetLogMetric":    append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"CreateLogMetric": append([]gax.CallOption{withNonIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"UpdateLogMetric": append([]gax.CallOption{withNonIdempotentRetryCodes()}, defaultRetryOptions()...),
+			"DeleteLogMetric": append([]gax.CallOption{withIdempotentRetryCodes()}, defaultRetryOptions()...),
+		},
+	}
+}
+
+func withIdempotentRetryCodes() gax.CallOption {
+	return gax.WithRetryCodes([]codes.Code{
+		codes.DeadlineExceeded,
+		codes.Unavailable,
+	})
+}
+
+func withNonIdempotentRetryCodes() gax.CallOption {
+	return gax.WithRetryCodes([]codes.Code{})
+}
+
+func defaultRetryOptions() []gax.CallOption {
+	return []gax.CallOption{
+		gax.WithTimeout(45000 * time.Millisecond),
+		gax.WithDelayTimeoutSettings(100*time.Millisecond, 1000*time.Millisecond, 1.2),
+		gax.WithRPCTimeoutSettings(2000*time.Millisecond, 30000*time.Millisecond, 1.5),
+	}
+}
+
+// Client is a client for interacting with MetricsServiceV2.
+type Client struct {
+	// The connection to the service.
+	conn *grpc.ClientConn
+
+	// The gRPC API client.
+	client google_logging_v2.MetricsServiceV2Client
+
+	// The map from the method name to the default call options for the method of this service.
+	callOptions map[string][]gax.CallOption
+
+	// The metadata to be sent with each request.
+	metadata map[string][]string
+}
+
+// NewClient creates a new API service client.
+func NewClient(ctx context.Context, opts ...gax.ClientOption) (*Client, error) {
+	s := defaultClientSettings()
+	for _, opt := range opts {
+		opt.Resolve(&s)
+	}
+	conn, err := gax.DialGRPC(ctx, s)
+	if err != nil {
+		return nil, err
+	}
+	return &Client{
+		conn:        conn,
+		client:      google_logging_v2.NewMetricsServiceV2Client(conn),
+		callOptions: s.CallOptions,
+		metadata: map[string][]string{
+			"x-goog-api-client": []string{fmt.Sprintf("%s/%s %s gax/%s go/%s", s.AppName, s.AppVersion, gapicNameVersion, gax.Version, runtime.Version())},
+		},
+	}, nil
+}
+
+// Close closes the connection to the API service. The user should invoke this when
+// the client is no longer required.
+func (c *Client) Close() error {
+	return c.conn.Close()
+}
+
+// Path templates.
+
+// ProjectPath returns the path for the project resource.
+func ProjectPath(project string) string {
+	path, err := projectPathTemplate.Render(map[string]string{
+		"project": project,
+	})
+	if err != nil {
+		panic(err)
+	}
+	return path
+}
+
+// MetricPath returns the path for the metric resource.
+func MetricPath(project string, metric string) string {
+	path, err := metricPathTemplate.Render(map[string]string{
+		"project": project,
+		"metric":  metric,
+	})
+	if err != nil {
+		panic(err)
+	}
+	return path
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// ListLogMetrics lists logs-based metrics.
+func (c *Client) ListLogMetrics(ctx context.Context, req *google_logging_v2.ListLogMetricsRequest) *LogMetricIterator {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	it := &LogMetricIterator{}
+	it.apiCall = func() error {
+		if it.atLastPage {
+			return Done
+		}
+		var resp *google_logging_v2.ListLogMetricsResponse
+		err := gax.Invoke(ctx, func(ctx context.Context) error {
+			var err error
+			req.PageToken = it.nextPageToken
+			req.PageSize = it.pageSize
+			resp, err = c.client.ListLogMetrics(ctx, req)
+			return err
+		}, c.callOptions["ListLogMetrics"]...)
+		if err != nil {
+			return err
+		}
+		if resp.NextPageToken == "" {
+			it.atLastPage = true
+		} else {
+			it.nextPageToken = resp.NextPageToken
+		}
+		it.items = resp.Metrics
+		return nil
+	}
+	return it
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// GetLogMetric gets a logs-based metric.
+func (c *Client) GetLogMetric(ctx context.Context, req *google_logging_v2.GetLogMetricRequest) (*google_logging_v2.LogMetric, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.LogMetric
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.GetLogMetric(ctx, req)
+		return err
+	}, c.callOptions["GetLogMetric"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// CreateLogMetric creates a logs-based metric.
+func (c *Client) CreateLogMetric(ctx context.Context, req *google_logging_v2.CreateLogMetricRequest) (*google_logging_v2.LogMetric, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.LogMetric
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.CreateLogMetric(ctx, req)
+		return err
+	}, c.callOptions["CreateLogMetric"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// UpdateLogMetric creates or updates a logs-based metric.
+func (c *Client) UpdateLogMetric(ctx context.Context, req *google_logging_v2.UpdateLogMetricRequest) (*google_logging_v2.LogMetric, error) {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	var resp *google_logging_v2.LogMetric
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		resp, err = c.client.UpdateLogMetric(ctx, req)
+		return err
+	}, c.callOptions["UpdateLogMetric"]...)
+	if err != nil {
+		return nil, err
+	}
+	return resp, nil
+}
+
+// AUTO-GENERATED DOCUMENTATION AND METHOD -- see instructions at the top of the file for editing.
+
+// DeleteLogMetric deletes a logs-based metric.
+func (c *Client) DeleteLogMetric(ctx context.Context, req *google_logging_v2.DeleteLogMetricRequest) error {
+	ctx = metadata.NewContext(ctx, c.metadata)
+	err := gax.Invoke(ctx, func(ctx context.Context) error {
+		var err error
+		_, err = c.client.DeleteLogMetric(ctx, req)
+		return err
+	}, c.callOptions["DeleteLogMetric"]...)
+	return err
+}
+
+// Iterators.
+//
+
+// LogMetricIterator manages a stream of *google_logging_v2.LogMetric.
+type LogMetricIterator struct {
+	// The current page data.
+	items         []*google_logging_v2.LogMetric
+	atLastPage    bool
+	currentIndex  int
+	pageSize      int32
+	nextPageToken string
+	apiCall       func() error
+}
+
+// NextPage moves to the next page and updates its internal data.
+// It returns Done if no more pages exist.
+func (it *LogMetricIterator) NextPage() ([]*google_logging_v2.LogMetric, error) {
+	err := it.apiCall()
+	if err != nil {
+		return nil, err
+	}
+	return it.items, err
+}
+
+// Next returns the next element in the stream. It returns Done at
+// the end of the stream.
+func (it *LogMetricIterator) Next() (*google_logging_v2.LogMetric, error) {
+	for it.currentIndex >= len(it.items) {
+		_, err := it.NextPage()
+		if err != nil {
+			return nil, err
+		}
+		it.currentIndex = 0
+	}
+	result := it.items[it.currentIndex]
+	it.currentIndex++
+	return result, nil
+}
+
+// SetPageSize sets the maximum size of the next page to be
+// retrieved.
+func (it *LogMetricIterator) SetPageSize(pageSize int32) {
+	it.pageSize = pageSize
+}
+
+// SetPageToken sets the next page token to be retrieved. Note, it
+// does not retrieve the next page, or modify the cached page. If
+// Next is called, there is no guarantee that the result returned
+// will be from the next page until NextPage is called.
+func (it *LogMetricIterator) SetPageToken(token string) {
+	it.nextPageToken = token
+}
+
+// NextPageToken returns the next page token.
+func (it *LogMetricIterator) NextPageToken() string {
+	return it.nextPageToken
+}
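For reference, the path templates compiled above (`projects/{project}/metrics/{metric}`) render by substituting each `{name}` placeholder with its bound value. A simplified sketch of that substitution — `renderPath` is a hypothetical helper; the real `gax.PathTemplate` also validates the bindings and supports wildcards:

```go
package main

import (
	"fmt"
	"strings"
)

// renderPath substitutes each {name} placeholder in a template like
// "projects/{project}/metrics/{metric}" with its bound value.
// Simplified: no validation of missing or extra bindings.
func renderPath(template string, bindings map[string]string) string {
	out := template
	for name, value := range bindings {
		out = strings.Replace(out, "{"+name+"}", value, 1)
	}
	return out
}

func main() {
	fmt.Println(renderPath("projects/{project}/metrics/{metric}",
		map[string]string{"project": "my-project", "metric": "error-count"}))
}
```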
diff --git a/go/src/google.golang.org/cloud/logging/apiv2/metrics/client_test.go b/go/src/google.golang.org/cloud/logging/apiv2/metrics/client_test.go
new file mode 100644
index 0000000..125a177
--- /dev/null
+++ b/go/src/google.golang.org/cloud/logging/apiv2/metrics/client_test.go
@@ -0,0 +1,91 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// AUTO-GENERATED DOCUMENTATION AND SERVICE
+
+package metrics_test
+
+import (
+	gax "github.com/googleapis/gax-go"
+	google_logging_v2 "github.com/googleapis/proto-client-go/logging/v2"
+	"golang.org/x/net/context"
+	"google.golang.org/cloud/logging/apiv2/metrics"
+)
+
+func ExampleNewClient() {
+	ctx := context.Background()
+	opts := []gax.ClientOption{ /* Optional client parameters. */ }
+	c, err := metrics.NewClient(ctx, opts...)
+	_, _ = c, err // Handle error.
+}
+
+func ExampleClient_ListLogMetrics() {
+	ctx := context.Background()
+	c, err := metrics.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.ListLogMetricsRequest{ /* Data... */ }
+	it := c.ListLogMetrics(ctx, req)
+	var resp *google_logging_v2.LogMetric
+	for {
+		resp, err = it.Next()
+		if err != nil {
+			break
+		}
+	}
+	_ = resp
+}
+
+func ExampleClient_GetLogMetric() {
+	ctx := context.Background()
+	c, err := metrics.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.GetLogMetricRequest{ /* Data... */ }
+	var resp *google_logging_v2.LogMetric
+	resp, err = c.GetLogMetric(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_CreateLogMetric() {
+	ctx := context.Background()
+	c, err := metrics.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.CreateLogMetricRequest{ /* Data... */ }
+	var resp *google_logging_v2.LogMetric
+	resp, err = c.CreateLogMetric(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_UpdateLogMetric() {
+	ctx := context.Background()
+	c, err := metrics.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.UpdateLogMetricRequest{ /* Data... */ }
+	var resp *google_logging_v2.LogMetric
+	resp, err = c.UpdateLogMetric(ctx, req)
+	_, _ = resp, err // Handle error.
+}
+
+func ExampleClient_DeleteLogMetric() {
+	ctx := context.Background()
+	c, err := metrics.NewClient(ctx)
+	_ = err // Handle error.
+
+	req := &google_logging_v2.DeleteLogMetricRequest{ /* Data... */ }
+	err = c.DeleteLogMetric(ctx, req)
+	_ = err // Handle error.
+}
diff --git a/go/src/google.golang.org/cloud/internal/transport/cancelreq.go b/go/src/google.golang.org/cloud/logging/apiv2/metrics/doc.go
similarity index 61%
copy from go/src/google.golang.org/cloud/internal/transport/cancelreq.go
copy to go/src/google.golang.org/cloud/logging/apiv2/metrics/doc.go
index ddae71c..c08b0cb 100644
--- a/go/src/google.golang.org/cloud/internal/transport/cancelreq.go
+++ b/go/src/google.golang.org/cloud/logging/apiv2/metrics/doc.go
@@ -1,4 +1,4 @@
-// Copyright 2015 Google Inc. All Rights Reserved.
+// Copyright 2016 Google Inc. All Rights Reserved.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
@@ -12,18 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-// +build go1.5
-
-package transport
-
-import "net/http"
-
-// makeReqCancel returns a closure that cancels the given http.Request
-// when called.
-func makeReqCancel(req *http.Request) func(http.RoundTripper) {
-	c := make(chan struct{})
-	req.Cancel = c
-	return func(http.RoundTripper) {
-		close(c)
-	}
-}
+// Package metrics is an experimental, auto-generated package for the logging
+// API.
+//
+// The Google Cloud Logging API lets you write log entries and manage your logs, log sinks and logs-based metrics.
+package metrics
diff --git a/go/src/google.golang.org/cloud/pubsub/acker.go b/go/src/google.golang.org/cloud/pubsub/acker.go
index 9960840..088ed79 100644
--- a/go/src/google.golang.org/cloud/pubsub/acker.go
+++ b/go/src/google.golang.org/cloud/pubsub/acker.go
@@ -78,7 +78,7 @@
 
 // acker acks messages in batches.
 type acker struct {
-	Client  *Client
+	s       service
 	Ctx     context.Context  // The context to use when acknowledging messages.
 	Sub     string           // The full name of the subscription.
 	AckTick <-chan time.Time // AckTick supplies the frequency with which to make ack requests.
@@ -135,25 +135,23 @@
 	a.wg.Wait()
 }
 
-const maxAckRetries = 1
+const maxAckAttempts = 2
 
 // ack acknowledges the supplied ackIDs.
 // After the acknowledgement request has completed (regardless of its success
 // or failure), ids will be passed to a.Notify.
 func (a *acker) ack(ids []string) {
-	var retries int
-	head, tail := a.Client.s.splitAckIDs(ids)
+	head, tail := a.s.splitAckIDs(ids)
 	for len(head) > 0 {
-		err := a.Client.s.acknowledge(a.Ctx, a.Sub, head)
-		if err != nil && retries < maxAckRetries {
-			// TODO(mcgreevy): more sophisticated retry on failure.
-			// NOTE: it is not incorrect to drop acks if we decide not to retry; the messages
-			//  will be redelievered, but this is a documented behaviour of the API.
-			retries += 1
-			continue
+		for i := 0; i < maxAckAttempts; i++ {
+			if a.s.acknowledge(a.Ctx, a.Sub, head) == nil {
+				break
+			}
 		}
-		retries = 0
-		head, tail = a.Client.s.splitAckIDs(tail)
+		// NOTE: if retry gives up and returns an error, we simply drop
+		// those ack IDs. The messages will be redelivered and this is
+		// a documented behaviour of the API.
+		head, tail = a.s.splitAckIDs(tail)
 	}
 	for _, id := range ids {
 		a.Notify(id)
diff --git a/go/src/google.golang.org/cloud/pubsub/acker_test.go b/go/src/google.golang.org/cloud/pubsub/acker_test.go
index 4565e7e..9e283ba 100644
--- a/go/src/google.golang.org/cloud/pubsub/acker_test.go
+++ b/go/src/google.golang.org/cloud/pubsub/acker_test.go
@@ -27,11 +27,10 @@
 func TestAcker(t *testing.T) {
 	tick := make(chan time.Time)
 	s := &testService{acknowledgeCalled: make(chan acknowledgeCall)}
-	c := &Client{projectID: "projid", s: s}
 
 	processed := make(chan string, 10)
 	acker := &acker{
-		Client:  c,
+		s:       s,
 		Ctx:     context.Background(),
 		Sub:     "subname",
 		AckTick: tick,
@@ -78,11 +77,10 @@
 func TestAckerFastMode(t *testing.T) {
 	tick := make(chan time.Time)
 	s := &testService{acknowledgeCalled: make(chan acknowledgeCall)}
-	c := &Client{projectID: "projid", s: s}
 
 	processed := make(chan string, 10)
 	acker := &acker{
-		Client:  c,
+		s:       s,
 		Ctx:     context.Background(),
 		Sub:     "subname",
 		AckTick: tick,
@@ -129,11 +127,10 @@
 func TestAckerStop(t *testing.T) {
 	tick := make(chan time.Time)
 	s := &testService{acknowledgeCalled: make(chan acknowledgeCall, 10)}
-	c := &Client{projectID: "projid", s: s}
 
 	processed := make(chan string)
 	acker := &acker{
-		Client:  c,
+		s:       s,
 		Ctx:     context.Background(),
 		Sub:     "subname",
 		AckTick: tick,
@@ -249,9 +246,8 @@
 			calls: tc.calls,
 		}
 
-		c := &Client{projectID: "projid", s: s}
 		acker := &acker{
-			Client: c,
+			s:      s,
 			Ctx:    context.Background(),
 			Sub:    "subname",
 			Notify: func(string) {},
diff --git a/go/src/google.golang.org/cloud/pubsub/doc.go b/go/src/google.golang.org/cloud/pubsub/doc.go
new file mode 100644
index 0000000..8d9923f
--- /dev/null
+++ b/go/src/google.golang.org/cloud/pubsub/doc.go
@@ -0,0 +1,115 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+/*
+Package pubsub provides an easy way to publish and receive Google Cloud Pub/Sub
+messages, hiding the details of the underlying server RPCs.  Google Cloud
+Pub/Sub is a many-to-many, asynchronous messaging system that decouples senders
+and receivers.
+
+Note: This package is experimental and may make backwards-incompatible changes.
+
+More information about Google Cloud Pub/Sub is available at
+https://cloud.google.com/pubsub/docs
+
+Publishing
+
+Google Cloud Pub/Sub messages are published to topics. Topics may be created
+using the pubsub package like so:
+
+ topic, err := client.NewTopic(context.Background(), "topic-name")
+
+Messages may then be published to a topic:
+
+ msgIDs, err := topic.Publish(ctx, &pubsub.Message{
+	Data: []byte("payload"),
+ })
+
+Receiving
+
+To receive messages published to a topic, clients create subscriptions
+to the topic. There may be more than one subscription per topic; each message
+that is published to the topic will be delivered to all of its subscriptions.
+
+Subscriptions may be created like so:
+
+ sub, err := client.NewSubscription(context.Background(), "sub-name", topic, 0, nil)
+
+Messages are then consumed from a subscription via an iterator:
+
+ // Construct the iterator
+ it, err := sub.Pull(context.Background())
+ if err != nil {
+	// handle err ...
+ }
+ defer it.Stop()
+
+ // Consume N messages
+ for i := 0; i < N; i++ {
+ 	msg, err := it.Next()
+ 	if err == pubsub.Done {
+ 		break
+ 	}
+ 	if err != nil {
+ 		// handle err ...
+ 		break
+ 	}
+
+ 	log.Print("got message: ", string(msg.Data))
+ 	msg.Done(true)
+ }
+
+The message iterator returns messages one at a time, fetching batches of
+messages behind the scenes as needed. Once client code has processed the
+message, it must call Message.Done, otherwise the message will eventually be
+redelivered. For more information and configuration options, see "Deadlines"
+below.
+
+Note: It is possible for Messages to be redelivered, even if Message.Done has
+been called. Client code must be robust to multiple deliveries of messages.
+
+Deadlines
+
+The default pubsub deadlines are suitable for most use cases, but may be
+overridden.  This section describes the tradeoffs that should be considered
+when overriding the defaults.
+
+Behind the scenes, each message returned by the Pub/Sub server has an
+associated lease, known as an "ACK deadline".
+Unless a message is acknowledged within the ACK deadline, or the client requests that
+the ACK deadline be extended, the message will become eligible for redelivery.
+As a convenience, the pubsub package will automatically extend deadlines until
+either:
+ * Message.Done is called, or
+ * the "MaxExtension" period elapses from the time the message is fetched from the server.
+
+The initial ACK deadline given to each message defaults to 10 seconds, but may
+be overridden during subscription creation.  Selecting an ACK deadline is a
+tradeoff between message redelivery latency and RPC volume. If the pubsub
+package fails to acknowledge or extend a message (e.g. due to unexpected
+termination of the process), a shorter ACK deadline will generally result in
+faster message redelivery by the Pub/Sub system. However, a short ACK deadline
+may also increase the number of deadline extension RPCs that the pubsub package
+sends to the server.
+
+The default max extension period is DefaultMaxExtension, and can be overridden
+by passing a MaxExtension option to Subscription.Pull. Selecting a max
+extension period is a tradeoff between the speed at which client code must
+process messages, and the redelivery delay if messages fail to be acknowledged
+(e.g. because client code neglects to do so).  Using a large MaxExtension
+increases the available time for client code to process messages.  However, if
+the client code neglects to call Message.Done, a large MaxExtension will
+increase the delay before the message is redelivered.
+*/
+package pubsub // import "google.golang.org/cloud/pubsub"
diff --git a/go/src/google.golang.org/cloud/pubsub/endtoend_test.go b/go/src/google.golang.org/cloud/pubsub/endtoend_test.go
new file mode 100644
index 0000000..65061de
--- /dev/null
+++ b/go/src/google.golang.org/cloud/pubsub/endtoend_test.go
@@ -0,0 +1,323 @@
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pubsub
+
+import (
+	"fmt"
+	"math/rand"
+	"reflect"
+	"sync"
+	"testing"
+	"time"
+
+	"golang.org/x/net/context"
+
+	"google.golang.org/cloud"
+	"google.golang.org/cloud/internal/testutil"
+)
+
+const timeout = time.Minute * 10
+const ackDeadline = time.Second * 10
+
+const batchSize = 100
+const batches = 100
+
+// messageCounter keeps track of how many times a given message has been received.
+type messageCounter struct {
+	mu     sync.Mutex
+	counts map[string]int
+	// A value is sent to recv each time Inc is called.
+	recv chan struct{}
+}
+
+func (mc *messageCounter) Inc(msgID string) {
+	mc.mu.Lock()
+	mc.counts[msgID] += 1
+	mc.mu.Unlock()
+	mc.recv <- struct{}{}
+}
+
+// process pulls messages from an iterator and records them in mc.
+func process(t *testing.T, it *Iterator, mc *messageCounter) {
+	for {
+		m, err := it.Next()
+		if err == Done {
+			return
+		}
+
+		if err != nil {
+			t.Errorf("unexpected err from iterator: %v", err)
+			return
+		}
+		mc.Inc(m.ID)
+		// Simulate time taken to process m, while continuing to process more messages.
+		go func() {
+			// Some messages will need to have their ack deadline extended due to this delay.
+			delay := rand.Intn(int(ackDeadline * 3))
+			time.Sleep(time.Duration(delay))
+			m.Done(true)
+		}()
+	}
+}
+
+// newIter constructs a new Iterator.
+func newIter(t *testing.T, ctx context.Context, sub *Subscription) *Iterator {
+	it, err := sub.Pull(ctx)
+	if err != nil {
+		t.Fatalf("error constructing iterator: %v", err)
+	}
+	return it
+}
+
+// launchIter launches a number of goroutines to pull from the supplied Iterator.
+func launchIter(t *testing.T, ctx context.Context, it *Iterator, mc *messageCounter, n int, wg *sync.WaitGroup) {
+	for j := 0; j < n; j++ {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			process(t, it, mc)
+		}()
+	}
+}
+
+// iteratorLifetimes controls how long iterators live before they are stopped.
+type iteratorLifetimes interface {
+	// lifetimeChan should be called when an iterator is started. The
+	// returned channel will send when the iterator should be stopped.
+	lifetimeChan() <-chan time.Time
+}
+
+var immortal = &explicitLifetimes{}
+
+// explicitLifetimes implements iteratorLifetimes with hard-coded lifetimes, falling back
+// to indefinite lifetimes when no explicit lifetimes remain.
+type explicitLifetimes struct {
+	mu        sync.Mutex
+	lifetimes []time.Duration
+}
+
+func (el *explicitLifetimes) lifetimeChan() <-chan time.Time {
+	el.mu.Lock()
+	defer el.mu.Unlock()
+	if len(el.lifetimes) == 0 {
+		return nil
+	}
+	lifetime := el.lifetimes[0]
+	el.lifetimes = el.lifetimes[1:]
+	return time.After(lifetime)
+}
+
+// consumer consumes messages according to its configuration.
+type consumer struct {
+	// How many goroutines should pull from the subscription.
+	iteratorsInFlight int
+	// How many goroutines should pull from each iterator.
+	concurrencyPerIterator int
+
+	lifetimes iteratorLifetimes
+}
+
+// consume reads messages from a subscription, and keeps track of what it receives in mc.
+// After consume returns, the caller should wait on wg to ensure that no more updates to mc will be made.
+func (c *consumer) consume(t *testing.T, ctx context.Context, sub *Subscription, mc *messageCounter, wg *sync.WaitGroup, stop <-chan struct{}) {
+	for i := 0; i < c.iteratorsInFlight; i++ {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			for {
+				it := newIter(t, ctx, sub)
+				launchIter(t, ctx, it, mc, c.concurrencyPerIterator, wg)
+
+				select {
+				case <-c.lifetimes.lifetimeChan():
+					it.Stop()
+				case <-stop:
+					it.Stop()
+					return
+				}
+			}
+
+		}()
+	}
+}
+
+// publish publishes many messages to topic, and returns the published message ids.
+func publish(t *testing.T, ctx context.Context, topic *Topic) []string {
+	var published []string
+	msgs := make([]*Message, batchSize)
+	for i := 0; i < batches; i++ {
+		for j := 0; j < batchSize; j++ {
+			text := fmt.Sprintf("msg %02d-%02d", i, j)
+			msgs[j] = &Message{Data: []byte(text)}
+		}
+		ids, err := topic.Publish(ctx, msgs...)
+		if err != nil {
+			t.Errorf("Publish error: %v", err)
+		}
+		published = append(published, ids...)
+	}
+	return published
+}
+
+// diff returns counts of the differences between got and want.
+func diff(got, want map[string]int) map[string]int {
+	ids := make(map[string]struct{})
+	for k := range got {
+		ids[k] = struct{}{}
+	}
+	for k := range want {
+		ids[k] = struct{}{}
+	}
+
+	gotWantCount := make(map[string]int)
+	for k := range ids {
+		if got[k] == want[k] {
+			continue
+		}
+		desc := fmt.Sprintf("<got: %v ; want: %v>", got[k], want[k])
+		gotWantCount[desc] += 1
+	}
+	return gotWantCount
+}
+
+// TestEndToEnd pumps many messages into a topic and tests that they are all delivered to each subscription for the topic.
+// It also tests that messages are not unexpectedly redelivered.
+func TestEndToEnd(t *testing.T) {
+	if testing.Short() {
+		t.Skip("Integration tests skipped in short mode")
+	}
+	ctx := context.Background()
+	ts := testutil.TokenSource(ctx, ScopePubSub, ScopeCloudPlatform)
+	if ts == nil {
+		t.Skip("Integration tests skipped. See CONTRIBUTING.md for details")
+	}
+
+	now := time.Now()
+	topicName := fmt.Sprintf("endtoend-%d", now.Unix())
+	subPrefix := fmt.Sprintf("endtoend-%d", now.Unix())
+
+	client, err := NewClient(ctx, testutil.ProjID(), cloud.WithTokenSource(ts))
+	if err != nil {
+		t.Fatalf("Creating client error: %v", err)
+	}
+
+	var topic *Topic
+	if topic, err = client.NewTopic(ctx, topicName); err != nil {
+		t.Fatalf("CreateTopic error: %v", err)
+	}
+	defer topic.Delete(ctx)
+
+	// Three subscriptions to the same topic.
+	var subA, subB, subC *Subscription
+	if subA, err = client.NewSubscription(ctx, subPrefix+"-a", topic, ackDeadline, nil); err != nil {
+		t.Fatalf("CreateSub error: %v", err)
+	}
+	defer subA.Delete(ctx)
+
+	if subB, err = client.NewSubscription(ctx, subPrefix+"-b", topic, ackDeadline, nil); err != nil {
+		t.Fatalf("CreateSub error: %v", err)
+	}
+	defer subB.Delete(ctx)
+
+	if subC, err = client.NewSubscription(ctx, subPrefix+"-c", topic, ackDeadline, nil); err != nil {
+		t.Fatalf("CreateSub error: %v", err)
+	}
+	defer subC.Delete(ctx)
+
+	expectedCounts := make(map[string]int)
+	for _, id := range publish(t, ctx, topic) {
+		expectedCounts[id] = 1
+	}
+
+	// recv provides an indication that messages are still arriving.
+	recv := make(chan struct{})
+
+	// Keep track of the number of times each message (by message id) was
+	// seen from each subscription.
+	mcA := &messageCounter{counts: make(map[string]int), recv: recv}
+	mcB := &messageCounter{counts: make(map[string]int), recv: recv}
+	mcC := &messageCounter{counts: make(map[string]int), recv: recv}
+
+	stopC := make(chan struct{})
+
+	// We have three subscriptions to our topic.
+	// Each subscription will get a copy of each published message.
+	//
+	// subA has just one iterator, while subB has two. The subB iterators
+	// will each process roughly half of the messages for subB. All of
+	// these iterators live until all messages have been consumed.  subC is
+	// processed by a series of short-lived iterators.
+
+	var wg sync.WaitGroup
+
+	con := &consumer{
+		concurrencyPerIterator: 1,
+		iteratorsInFlight:      2,
+		lifetimes:              immortal,
+	}
+	con.consume(t, ctx, subA, mcA, &wg, stopC)
+
+	con = &consumer{
+		concurrencyPerIterator: 1,
+		iteratorsInFlight:      2,
+		lifetimes:              immortal,
+	}
+	con.consume(t, ctx, subB, mcB, &wg, stopC)
+
+	con = &consumer{
+		concurrencyPerIterator: 1,
+		iteratorsInFlight:      2,
+		lifetimes: &explicitLifetimes{
+			lifetimes: []time.Duration{ackDeadline, ackDeadline, ackDeadline / 2, ackDeadline / 2},
+		},
+	}
+	con.consume(t, ctx, subC, mcC, &wg, stopC)
+
+	go func() {
+		timeoutC := time.After(timeout)
+		// Every time this ticker ticks, we will check if we have received any
+		// messages since the last time it ticked.  We check less frequently
+		// than the ack deadline, so that we can detect if messages are
+		// redelivered after having their ack deadline extended.
+		checkQuiescence := time.NewTicker(ackDeadline * 3)
+		defer checkQuiescence.Stop()
+
+		var received bool
+		for {
+			select {
+			case <-recv:
+				received = true
+			case <-checkQuiescence.C:
+				if received {
+					received = false
+				} else {
+					close(stopC)
+					return
+				}
+			case <-timeoutC:
+				t.Errorf("timed out")
+				close(stopC)
+				return
+			}
+		}
+	}()
+	wg.Wait()
+
+	for _, mc := range []*messageCounter{mcA, mcB, mcC} {
+		if got, want := mc.counts, expectedCounts; !reflect.DeepEqual(got, want) {
+			t.Errorf("message counts: %v\n", diff(got, want))
+		}
+	}
+}
diff --git a/go/src/google.golang.org/cloud/pubsub/integration_test.go b/go/src/google.golang.org/cloud/pubsub/integration_test.go
index 5c8853a..c212baa 100644
--- a/go/src/google.golang.org/cloud/pubsub/integration_test.go
+++ b/go/src/google.golang.org/cloud/pubsub/integration_test.go
@@ -61,12 +61,12 @@
 		t.Fatalf("Creating client error: %v", err)
 	}
 
-	var topic *TopicHandle
+	var topic *Topic
 	if topic, err = client.NewTopic(ctx, topicName); err != nil {
 		t.Errorf("CreateTopic error: %v", err)
 	}
 
-	var sub *SubscriptionHandle
+	var sub *Subscription
 	if sub, err = client.NewSubscription(ctx, subName, topic, 0, nil); err != nil {
 		t.Errorf("CreateSub error: %v", err)
 	}
@@ -125,7 +125,7 @@
 	for i := 0; i < len(want); i++ {
 		m, err := it.Next()
 		if err != nil {
-			t.Fatalf("error getting next message:", err) // TODO: add deadline to context.
+			t.Fatalf("error getting next message: %v", err)
 		}
 		md := extractMessageData(m)
 		got[md.ID] = md
diff --git a/go/src/google.golang.org/cloud/pubsub/iterator.go b/go/src/google.golang.org/cloud/pubsub/iterator.go
index dd9bb3a..b77da26 100644
--- a/go/src/google.golang.org/cloud/pubsub/iterator.go
+++ b/go/src/google.golang.org/cloud/pubsub/iterator.go
@@ -15,106 +15,98 @@
 package pubsub
 
 import (
-	"io"
+	"errors"
 	"sync"
 	"time"
 
 	"golang.org/x/net/context"
 )
 
+// Done is returned when an iteration is complete.
+var Done = errors.New("no more messages")
+
 type Iterator struct {
-	// The name of the subscription that the Iterator is pulling messages from.
-	sub string
-	// The context to use for acking messages and extending message deadlines.
-	ctx context.Context
-
-	c *Client
-
-	// Controls how often we send an ack deadline extension request.
+	// kaTicker controls how often we send an ack deadline extension request.
 	kaTicker *time.Ticker
-	// Controls how often we acknowledge a batch of messages.
+	// ackTicker controls how often we acknowledge a batch of messages.
 	ackTicker *time.Ticker
 
-	ka     keepAlive
-	acker  acker
-	puller puller
+	ka     *keepAlive
+	acker  *acker
+	puller *puller
 
-	mu     sync.Mutex
-	closed bool
+	// mu ensures that cleanup only happens once, and concurrent Stop
+	// invocations block until cleanup completes.
+	mu sync.Mutex
+
+	// closed is used to signal that Stop has been called.
+	closed chan struct{}
 }
 
 // newIterator starts a new Iterator.  Stop must be called on the Iterator
 // when it is no longer needed.
 // subName is the full name of the subscription to pull messages from.
-func (c *Client) newIterator(ctx context.Context, subName string, po *pullOptions) *Iterator {
-	it := &Iterator{
-		sub: subName,
-		ctx: ctx,
-		c:   c,
-	}
-
+// ctx is the context to use for acking messages and extending message deadlines.
+func newIterator(ctx context.Context, s service, subName string, po *pullOptions) *Iterator {
 	// TODO: make kaTicker frequency more configurable.
 	// (ackDeadline - 5s) is a reasonable default for now, because the minimum ack period is 10s.  This gives us 5s grace.
 	keepAlivePeriod := po.ackDeadline - 5*time.Second
-	it.kaTicker = time.NewTicker(keepAlivePeriod) // Stopped in it.Stop
-	it.ka = keepAlive{
-		Client:        it.c,
-		Ctx:           it.ctx,
-		Sub:           it.sub,
-		ExtensionTick: it.kaTicker.C,
-		Deadline:      po.ackDeadline,
-		MaxExtension:  po.maxExtension,
-	}
+	kaTicker := time.NewTicker(keepAlivePeriod) // Stopped in it.Stop
 
 	// TODO: make ackTicker more configurable.  Something less than
 	// kaTicker is a reasonable default (there's no point extending
 	// messages when they could be acked instead).
-	it.ackTicker = time.NewTicker(keepAlivePeriod / 2) // Stopped in it.Stop
-	it.acker = acker{
-		Client:  it.c,
-		Ctx:     it.ctx,
-		Sub:     it.sub,
-		AckTick: it.ackTicker.C,
-		Notify:  it.ka.Remove,
+	ackTicker := time.NewTicker(keepAlivePeriod / 2) // Stopped in it.Stop
+
+	ka := &keepAlive{
+		s:             s,
+		Ctx:           ctx,
+		Sub:           subName,
+		ExtensionTick: kaTicker.C,
+		Deadline:      po.ackDeadline,
+		MaxExtension:  po.maxExtension,
 	}
 
-	it.puller = puller{
-		Client:    it.c,
-		Sub:       it.sub,
-		BatchSize: int64(po.maxPrefetch),
-		Notify:    it.ka.Add,
+	ack := &acker{
+		s:       s,
+		Ctx:     ctx,
+		Sub:     subName,
+		AckTick: ackTicker.C,
+		Notify:  ka.Remove,
 	}
 
-	it.ka.Start()
-	it.acker.Start()
-	return it
+	pull := newPuller(s, subName, ctx, int64(po.maxPrefetch), ka.Add, ka.Remove)
+
+	ka.Start()
+	ack.Start()
+	return &Iterator{
+		kaTicker:  kaTicker,
+		ackTicker: ackTicker,
+		ka:        ka,
+		acker:     ack,
+		puller:    pull,
+		closed:    make(chan struct{}),
+	}
 }
 
-// Next returns the next Message to be processed.  The caller must call Done on
-// the returned Message when finished with it.
-// Once Stop has been called, subsequent calls to Next will return io.EOF.
+// Next returns the next Message to be processed.  The caller must call
+// Message.Done when finished with it.
+// Once Stop has been called, calls to Next will return Done.
 func (it *Iterator) Next() (*Message, error) {
-	it.mu.Lock()
-	defer it.mu.Unlock()
-	if it.closed {
-		return nil, io.EOF
+	m, err := it.puller.Next()
+
+	if err == nil {
+		m.it = it
+		return m, nil
 	}
 
 	select {
-	case <-it.ctx.Done():
-		return nil, it.ctx.Err()
+	// If Stop has been called, we return Done regardless of the value of err.
+	case <-it.closed:
+		return nil, Done
 	default:
-	}
-
-	// Note: this is the only place where messages are added to keepAlive,
-	// and this code is protected by mu. This means once an iterator starts
-	// being closed down, no more messages will be added to keepalive.
-	m, err := it.puller.Next(it.ctx)
-	if err != nil {
 		return nil, err
 	}
-	m.it = it
-	return m, nil
 }
 
 // Client code must call Stop on an Iterator when finished with it.
@@ -124,34 +116,35 @@
 // Stop need only be called once, but may be called multiple times from
 // multiple goroutines.
 func (it *Iterator) Stop() {
-	// TODO: test calling from multiple goroutines.
 	it.mu.Lock()
 	defer it.mu.Unlock()
-	if it.closed {
-		// early return ensures that it.ka.Stop is only called once.
-		return
-	}
-	it.closed = true
 
-	// Remove messages that are being kept alive, but have not been
-	// supplied to the caller yet.  Then the only messages being kept alive
-	// will be those that have been supplied to the caller but have not yet
-	// had their Done method called.
-	for _, m := range it.puller.Pending() {
-		it.ka.Remove(m.AckID)
+	select {
+	case <-it.closed:
+		// Cleanup has already been performed.
+		return
+	default:
 	}
 
+	// We close this channel before calling it.puller.Stop to ensure that we
+	// reliably return Done from Next.
+	close(it.closed)
+
+	// Stop the puller. Once this completes, no more messages will be added
+	// to it.ka.
+	it.puller.Stop()
+
 	// Start acking messages as they arrive, ignoring ackTicker.  This will
 	// result in it.ka.Stop, below, returning as soon as possible.
 	it.acker.FastMode()
 
 	// This will block until
-	//   (a) it.Ctx is done, or
+	//   (a) it.ka.Ctx is done, or
 	//   (b) all messages have been removed from keepAlive.
 	// (b) will happen once all outstanding messages have been either ACKed or NACKed.
 	it.ka.Stop()
 
-	// There are no more live messages that we care about, so kill off the acker.
+	// There are no more live messages, so kill off the acker.
 	it.acker.Stop()
 
 	it.kaTicker.Stop()
@@ -159,9 +152,6 @@
 }
 
 func (it *Iterator) done(ackID string, ack bool) {
-	// NOTE: this method does not lock mu, because it's fine for done to be
-	// called while the iterator is in the process of being closed.  In
-	// fact, this is the only way to drain oustanding messages.
 	if ack {
 		it.acker.Ack(ackID)
 		// There's no need to call it.ka.Remove here, as acker will
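The reworked Stop above serializes cleanup with a mutex and records completion by closing a channel, so Stop is idempotent even when called from several goroutines at once. A self-contained sketch of this close-once idiom (stopper and cleanups are illustrative names for this sketch, not types from the package):

```go
package main

import (
	"fmt"
	"sync"
)

// stopper mirrors the Iterator's shutdown scheme: a mutex serializes
// Stop, and a closed channel records that cleanup has already run.
type stopper struct {
	mu       sync.Mutex
	closed   chan struct{}
	cleanups int
}

func newStopper() *stopper { return &stopper{closed: make(chan struct{})} }

func (s *stopper) Stop() {
	s.mu.Lock()
	defer s.mu.Unlock()
	select {
	case <-s.closed:
		return // cleanup already performed
	default:
	}
	close(s.closed)
	s.cleanups++ // stands in for stopping pullers, tickers, etc.
}

// stopped reports whether Stop has been called; this is the same
// non-blocking check Next uses to decide whether to return Done.
func (s *stopper) stopped() bool {
	select {
	case <-s.closed:
		return true
	default:
		return false
	}
}

func main() {
	s := newStopper()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); s.Stop() }()
	}
	wg.Wait()
	fmt.Println(s.cleanups, s.stopped()) // cleanup ran exactly once
}
```

Closing the channel before stopping the puller is what guarantees that a concurrent Next observes the shutdown and returns Done rather than a transient error.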
diff --git a/go/src/google.golang.org/cloud/pubsub/iterator_test.go b/go/src/google.golang.org/cloud/pubsub/iterator_test.go
new file mode 100644
index 0000000..53bb85f
--- /dev/null
+++ b/go/src/google.golang.org/cloud/pubsub/iterator_test.go
@@ -0,0 +1,247 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pubsub
+
+import (
+	"fmt"
+	"reflect"
+	"testing"
+	"time"
+
+	"golang.org/x/net/context"
+)
+
+func TestReturnsDoneOnStop(t *testing.T) {
+	type testCase struct {
+		abort func(*Iterator, context.CancelFunc)
+		want  error
+	}
+
+	for _, tc := range []testCase{
+		{
+			abort: func(it *Iterator, cancel context.CancelFunc) {
+				it.Stop()
+			},
+			want: Done,
+		},
+		{
+			abort: func(it *Iterator, cancel context.CancelFunc) {
+				cancel()
+			},
+			want: context.Canceled,
+		},
+		{
+			abort: func(it *Iterator, cancel context.CancelFunc) {
+				it.Stop()
+				cancel()
+			},
+			want: Done,
+		},
+		{
+			abort: func(it *Iterator, cancel context.CancelFunc) {
+				cancel()
+				it.Stop()
+			},
+			want: Done,
+		},
+	} {
+		s := &blockingFetch{}
+		ctx, cancel := context.WithCancel(context.Background())
+		it := newIterator(ctx, s, "subname", &pullOptions{ackDeadline: time.Second * 10, maxExtension: time.Hour})
+		defer it.Stop()
+		tc.abort(it, cancel)
+
+		_, err := it.Next()
+		if err != tc.want {
+			t.Errorf("iterator Next error after abort: got:\n%v\nwant:\n%v", err, tc.want)
+		}
+	}
+}
+
+// blockingFetch implements message fetching by not returning until its context is cancelled.
+type blockingFetch struct {
+	service
+}
+
+func (s *blockingFetch) fetchMessages(ctx context.Context, subName string, maxMessages int64) ([]*Message, error) {
+	<-ctx.Done()
+	return nil, ctx.Err()
+}
+
+// justInTimeFetch simulates the situation where the iterator is aborted just after the fetch RPC
+// succeeds, so the rest of puller.Next will continue to execute and return successfully.
+type justInTimeFetch struct {
+	service
+}
+
+func (s *justInTimeFetch) fetchMessages(ctx context.Context, subName string, maxMessages int64) ([]*Message, error) {
+	<-ctx.Done()
+	// The context was cancelled, but let's pretend that this happened just after our RPC returned.
+
+	var result []*Message
+	for i := 0; i < int(maxMessages); i++ {
+		val := fmt.Sprintf("msg%v", i)
+		result = append(result, &Message{Data: []byte(val), AckID: val})
+	}
+	return result, nil
+}
+
+func (s *justInTimeFetch) splitAckIDs(ids []string) ([]string, []string) {
+	return nil, nil
+}
+
+func (s *justInTimeFetch) modifyAckDeadline(ctx context.Context, subName string, deadline time.Duration, ackIDs []string) error {
+	return nil
+}
+
+func TestAfterAbortReturnsNoMoreThanOneMessage(t *testing.T) {
+	// Each test case is exercised by making two concurrent blocking calls on an
+	// Iterator, and then aborting the iterator.
+	// The result should be one call to Next returning a message, and the other returning an error.
+	type testCase struct {
+		abort func(*Iterator, context.CancelFunc)
+		// want is the error that should be returned from one Next invocation.
+		want error
+	}
+	for n := 1; n < 3; n++ {
+		for _, tc := range []testCase{
+			{
+				abort: func(it *Iterator, cancel context.CancelFunc) {
+					it.Stop()
+				},
+				want: Done,
+			},
+			{
+				abort: func(it *Iterator, cancel context.CancelFunc) {
+					cancel()
+				},
+				want: context.Canceled,
+			},
+			{
+				abort: func(it *Iterator, cancel context.CancelFunc) {
+					it.Stop()
+					cancel()
+				},
+				want: Done,
+			},
+			{
+				abort: func(it *Iterator, cancel context.CancelFunc) {
+					cancel()
+					it.Stop()
+				},
+				want: Done,
+			},
+		} {
+			s := &justInTimeFetch{}
+			ctx, cancel := context.WithCancel(context.Background())
+
+			// if maxPrefetch == 1, there will be no messages in the puller buffer when Next is invoked the second time.
+			// if maxPrefetch == 2, there will be 1 message in the puller buffer when Next is invoked the second time.
+			po := &pullOptions{
+				ackDeadline:  time.Second * 10,
+				maxExtension: time.Hour,
+				maxPrefetch:  n,
+			}
+			it := newIterator(ctx, s, "subname", po)
+			defer it.Stop()
+
+			type result struct {
+				m   *Message
+				err error
+			}
+			results := make(chan *result, 2)
+
+			for i := 0; i < 2; i++ {
+				go func() {
+					m, err := it.Next()
+					results <- &result{m, err}
+					if err == nil {
+						m.Done(false)
+					}
+				}()
+			}
+			// Wait for goroutines to block on it.Next().
+			time.Sleep(time.Millisecond)
+			tc.abort(it, cancel)
+
+			result1 := <-results
+			result2 := <-results
+
+			// There should be one error result, and one non-error result.
+			// Make result1 be the non-error result.
+			if result1.err != nil {
+				result1, result2 = result2, result1
+			}
+
+			if string(result1.m.Data) != "msg0" {
+				t.Errorf("After abort, got message: %v, want %v", result1.m.Data, "msg0")
+			}
+			if result1.err != nil {
+				t.Errorf("After abort, got : %v, want nil", result1.err)
+			}
+			if result2.m != nil {
+				t.Errorf("After abort, got message: %v, want nil", result2.m)
+			}
+			if result2.err != tc.want {
+				t.Errorf("After abort, got err: %v, want %v", result2.err, tc.want)
+			}
+		}
+	}
+}
+
+func TestMultipleStopCallsBlockUntilMessageDone(t *testing.T) {
+	s := &fetcherService{
+		results: []fetchResult{
+			{
+				msgs: []*Message{{AckID: "a"}, {AckID: "b"}},
+			},
+		},
+	}
+
+	ctx := context.Background()
+	it := newIterator(ctx, s, "subname", &pullOptions{ackDeadline: time.Second * 10, maxExtension: 0})
+
+	m, err := it.Next()
+	if err != nil {
+		t.Errorf("error calling Next: %v", err)
+	}
+
+	events := make(chan string, 3)
+	go func() {
+		it.Stop()
+		events <- "stopped"
+	}()
+	go func() {
+		it.Stop()
+		events <- "stopped"
+	}()
+
+	time.Sleep(10 * time.Millisecond)
+	events <- "nacked"
+	m.Done(false)
+
+	if got, want := []string{<-events, <-events, <-events}, []string{"nacked", "stopped", "stopped"}; !reflect.DeepEqual(got, want) {
+		t.Errorf("stopping iterator, got: %v ; want: %v", got, want)
+	}
+
+	// The iterator is stopped, so should not return another message.
+	m, err = it.Next()
+	if m != nil {
+		t.Errorf("message got: %v ; want: nil", m)
+	}
+	if err != Done {
+		t.Errorf("err got: %v ; want: %v", err, Done)
+	}
+}
diff --git a/go/src/google.golang.org/cloud/pubsub/keepalive.go b/go/src/google.golang.org/cloud/pubsub/keepalive.go
index ed7075a..11b5739 100644
--- a/go/src/google.golang.org/cloud/pubsub/keepalive.go
+++ b/go/src/google.golang.org/cloud/pubsub/keepalive.go
@@ -25,7 +25,7 @@
 // periodically extends them.
 // Messages are tracked by Ack ID.
 type keepAlive struct {
-	Client        *Client
+	s             service
 	Ctx           context.Context  // The context to use when extending deadlines.
 	Sub           string           // The full name of the subscription.
 	ExtensionTick <-chan time.Time // ExtenstionTick supplies the frequency with which to make extension requests.
@@ -71,9 +71,11 @@
 }
 
 // Add adds an ack id to be kept alive.
+// It should not be called after Stop.
 func (ka *keepAlive) Add(ackID string) {
 	ka.mu.Lock()
 	defer ka.mu.Unlock()
+
 	ka.items[ackID] = time.Now().Add(ka.MaxExtension)
 	ka.dr.SetPending(true)
 }
@@ -82,6 +84,9 @@
 func (ka *keepAlive) Remove(ackID string) {
 	ka.mu.Lock()
 	defer ka.mu.Unlock()
+
+	// Note: If a user NACKs a message after it has been removed due to
+	// expiry, Remove will be called twice with the same ack ID. This is OK.
 	delete(ka.items, ackID)
 	ka.dr.SetPending(len(ka.items) != 0)
 }
@@ -114,12 +119,30 @@
 	return live, expired
 }
 
+const maxExtensionAttempts = 2
+
 func (ka *keepAlive) extendDeadlines(ackIDs []string) {
-	// TODO: split into separate requests if there are too many ackIDs.
-	if len(ackIDs) > 0 {
-		_ = ka.Client.s.modifyAckDeadline(ka.Ctx, ka.Sub, ka.Deadline, ackIDs)
+	head, tail := ka.s.splitAckIDs(ackIDs)
+	for len(head) > 0 {
+		for i := 0; i < maxExtensionAttempts; i++ {
+			if ka.s.modifyAckDeadline(ka.Ctx, ka.Sub, ka.Deadline, head) == nil {
+				break
+			}
+		}
+		// NOTE: Messages whose deadlines we fail to extend will
+		// eventually be redelivered and this is a documented behaviour
+		// of the API.
+		//
+		// NOTE: If we fail to extend deadlines here, this
+		// implementation will continue to attempt extending the
+		// deadlines for those ack IDs the next time the extension
+		// ticker ticks.  By then the deadline will have expired.
+		// Re-extending them is harmless, however.
+		//
+		// TODO: call Remove for ids which fail to be extended.
+
+		head, tail = ka.s.splitAckIDs(tail)
 	}
-	// TODO: retry on error.  NOTE: if we ultimately fail to extend deadlines here, the messages will be redelivered, which is OK.
 }
 
 // A drain (once started) indicates via a channel when there is no work pending.
@@ -145,6 +168,12 @@
 
 func (d *drain) closeIfDrained() {
 	if !d.pending && d.started {
-		close(d.Drained)
+		// Check to see if d.Drained is closed before closing it.
+		// This allows SetPending(false) to be safely called multiple times.
+		select {
+		case <-d.Drained:
+		default:
+			close(d.Drained)
+		}
 	}
 }
diff --git a/go/src/google.golang.org/cloud/pubsub/keepalive_test.go b/go/src/google.golang.org/cloud/pubsub/keepalive_test.go
index 3f7e64a..5ae36f4 100644
--- a/go/src/google.golang.org/cloud/pubsub/keepalive_test.go
+++ b/go/src/google.golang.org/cloud/pubsub/keepalive_test.go
@@ -15,6 +15,7 @@
 package pubsub
 
 import (
+	"errors"
 	"reflect"
 	"sort"
 	"sync"
@@ -28,7 +29,6 @@
 	tick := make(chan time.Time)
 	deadline := time.Nanosecond * 15
 	s := &testService{modDeadlineCalled: make(chan modDeadlineCall)}
-	c := &Client{projectID: "projid", s: s}
 
 	checkModDeadlineCall := func(ackIDs []string) {
 		got := <-s.modDeadlineCalled
@@ -46,7 +46,7 @@
 	}
 
 	ka := &keepAlive{
-		Client:        c,
+		s:             s,
 		Ctx:           context.Background(),
 		Sub:           "subname",
 		ExtensionTick: tick,
@@ -80,10 +80,9 @@
 	defer ticker.Stop()
 
 	s := &testService{modDeadlineCalled: make(chan modDeadlineCall, 100)}
-	c := &Client{projectID: "projid", s: s}
 
 	ka := &keepAlive{
-		Client:        c,
+		s:             s,
 		Ctx:           context.Background(),
 		ExtensionTick: ticker.C,
 		MaxExtension:  time.Hour,
@@ -134,11 +133,10 @@
 	defer ticker.Stop()
 
 	s := &testService{modDeadlineCalled: make(chan modDeadlineCall, 100)}
-	c := &Client{projectID: "projid", s: s}
 
 	maxExtension := time.Millisecond
 	ka := &keepAlive{
-		Client:        c,
+		s:             s,
 		Ctx:           context.Background(),
 		ExtensionTick: ticker.C,
 		MaxExtension:  maxExtension,
@@ -162,11 +160,10 @@
 
 func TestKeepAliveStopsWhenAllAckIDsRemoved(t *testing.T) {
 	s := &testService{}
-	c := &Client{projectID: "projid", s: s}
 
 	maxExtension := time.Millisecond
 	ka := &keepAlive{
-		Client:        c,
+		s:             s,
 		Ctx:           context.Background(),
 		ExtensionTick: make(chan time.Time),
 		MaxExtension:  maxExtension,
@@ -197,11 +194,10 @@
 	defer ticker.Stop()
 
 	s := &testService{modDeadlineCalled: make(chan modDeadlineCall, 100)}
-	c := &Client{projectID: "projid", s: s}
 
 	maxExtension := time.Millisecond
 	ka := &keepAlive{
-		Client:        c,
+		s:             s,
 		Ctx:           context.Background(),
 		ExtensionTick: ticker.C,
 		MaxExtension:  maxExtension,
@@ -222,3 +218,102 @@
 		t.Fatalf("keepalive failed to stop before maxExtension deadline")
 	}
 }
+
+// extendCallResult contains a list of ackIDs which are expected in an ackID
+// extension request, along with the result that should be returned.
+type extendCallResult struct {
+	ackIDs []string
+	err    error
+}
+
+// extendService implements modifyAckDeadline using a hard-coded list of extendCallResults.
+type extendService struct {
+	service
+
+	calls []extendCallResult
+
+	t *testing.T // used for error logging.
+}
+
+func (es *extendService) modifyAckDeadline(ctx context.Context, subName string, deadline time.Duration, ackIDs []string) error {
+	if len(es.calls) == 0 {
+		es.t.Fatalf("unexpected call to modifyAckDeadline: ackIDs: %v", ackIDs)
+	}
+	call := es.calls[0]
+	es.calls = es.calls[1:]
+
+	if got, want := ackIDs, call.ackIDs; !reflect.DeepEqual(got, want) {
+		es.t.Errorf("unexpected arguments to modifyAckDeadline: got: %v ; want: %v", got, want)
+	}
+	return call.err
+}
+
+// splitAckIDs is a test implementation which returns the first 2 elements as head, and the rest as tail.
+func (es *extendService) splitAckIDs(ids []string) ([]string, []string) {
+	if len(ids) < 2 {
+		return ids, nil
+	}
+	return ids[:2], ids[2:]
+}
+func TestKeepAliveSplitsBatches(t *testing.T) {
+	type testCase struct {
+		calls []extendCallResult
+	}
+	for _, tc := range []testCase{
+		{
+			calls: []extendCallResult{
+				{
+					ackIDs: []string{"a", "b"},
+				},
+				{
+					ackIDs: []string{"c", "d"},
+				},
+				{
+					ackIDs: []string{"e", "f"},
+				},
+			},
+		},
+		{
+			calls: []extendCallResult{
+				{
+					ackIDs: []string{"a", "b"},
+					err:    errors.New("bang"),
+				},
+				// On error we retry once.
+				{
+					ackIDs: []string{"a", "b"},
+					err:    errors.New("bang"),
+				},
+				// We give up after failing twice, so we move on to the next set, "c" and "d".
+				{
+					ackIDs: []string{"c", "d"},
+					err:    errors.New("bang"),
+				},
+				// Again, we retry once.
+				{
+					ackIDs: []string{"c", "d"},
+				},
+				{
+					ackIDs: []string{"e", "f"},
+				},
+			},
+		},
+	} {
+		s := &extendService{
+			t:     t,
+			calls: tc.calls,
+		}
+
+		ka := &keepAlive{
+			s:   s,
+			Ctx: context.Background(),
+			Sub: "subname",
+		}
+
+		ka.extendDeadlines([]string{"a", "b", "c", "d", "e", "f"})
+
+		if len(s.calls) != 0 {
+			t.Errorf("expected extend calls did not occur: %v", s.calls)
+		}
+	}
+}
diff --git a/go/src/google.golang.org/cloud/pubsub/legacy.go b/go/src/google.golang.org/cloud/pubsub/legacy.go
index 7843d01..e07bb00 100644
--- a/go/src/google.golang.org/cloud/pubsub/legacy.go
+++ b/go/src/google.golang.org/cloud/pubsub/legacy.go
@@ -42,7 +42,7 @@
 
 // DeleteTopic deletes the specified topic.
 //
-// Deprecated: Use TopicHandle.Delete instead.
+// Deprecated: Use Topic.Delete instead.
 func DeleteTopic(ctx context.Context, name string) error {
 	_, err := rawService(ctx).Projects.Topics.Delete(fullTopicName(internal.ProjID(ctx), name)).Do()
 	return err
@@ -50,7 +50,7 @@
 
 // TopicExists returns true if a topic exists with the specified name.
 //
-// Deprecated: Use TopicHandle.Exists instead.
+// Deprecated: Use Topic.Exists instead.
 func TopicExists(ctx context.Context, name string) (bool, error) {
 	_, err := rawService(ctx).Projects.Topics.Get(fullTopicName(internal.ProjID(ctx), name)).Do()
 	if e, ok := err.(*googleapi.Error); ok && e.Code == http.StatusNotFound {
@@ -64,7 +64,7 @@
 
 // DeleteSub deletes the subscription.
 //
-// Deprecated: Use SubscriptionHandle.Delete instead.
+// Deprecated: Use Subscription.Delete instead.
 func DeleteSub(ctx context.Context, name string) error {
 	_, err := rawService(ctx).Projects.Subscriptions.Delete(fullSubName(internal.ProjID(ctx), name)).Do()
 	return err
@@ -72,7 +72,7 @@
 
 // SubExists returns true if subscription exists.
 //
-// Deprecated: Use SubscriptionHandle.Exists instead.
+// Deprecated: Use Subscription.Exists instead.
 func SubExists(ctx context.Context, name string) (bool, error) {
 	_, err := rawService(ctx).Projects.Subscriptions.Get(fullSubName(internal.ProjID(ctx), name)).Do()
 	if e, ok := err.(*googleapi.Error); ok && e.Code == http.StatusNotFound {
@@ -198,7 +198,7 @@
 // Publish publishes messages to the topic's subscribers. It returns
 // message IDs upon success.
 //
-// Deprecated: Use TopicHandle.Publish instead.
+// Deprecated: Use Topic.Publish instead.
 func Publish(ctx context.Context, topic string, msgs ...*Message) ([]string, error) {
 	var rawMsgs []*raw.PubsubMessage
 	if len(msgs) == 0 {
@@ -227,7 +227,7 @@
 // to handle push notifications coming from the Pub/Sub backend
 // for the specified subscription.
 //
-// Deprecated: Use SubscriptionHandle.ModifyPushConfig instead.
+// Deprecated: Use Subscription.ModifyPushConfig instead.
 func ModifyPushEndpoint(ctx context.Context, sub, endpoint string) error {
 	_, err := rawService(ctx).Projects.Subscriptions.ModifyPushConfig(fullSubName(internal.ProjID(ctx), sub), &raw.ModifyPushConfigRequest{
 		PushConfig: &raw.PushConfig{
diff --git a/go/src/google.golang.org/cloud/pubsub/pubsub.go b/go/src/google.golang.org/cloud/pubsub/pubsub.go
index 65bf8c4..e330535 100644
--- a/go/src/google.golang.org/cloud/pubsub/pubsub.go
+++ b/go/src/google.golang.org/cloud/pubsub/pubsub.go
@@ -12,12 +12,6 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-// Package pubsub contains a Google Cloud Pub/Sub client.
-//
-// This package is experimental and may make backwards-incompatible changes.
-//
-// More information about Google Cloud Pub/Sub is available at
-// https://cloud.google.com/pubsub/docs
 package pubsub // import "google.golang.org/cloud/pubsub"
 
 import (
@@ -65,6 +59,9 @@
 	}
 
 	s, err := newPubSubService(httpClient, endpoint)
+	if err != nil {
+		return nil, fmt.Errorf("constructing pubsub client: %v", err)
+	}
 
 	c := &Client{
 		projectID: projectID,
@@ -86,3 +83,52 @@
 	}
 	return prodAddr
 }
+
+// pageToken stores the next page token for a server response which is split over multiple pages.
+type pageToken struct {
+	tok      string
+	explicit bool
+}
+
+func (pt *pageToken) set(tok string) {
+	pt.tok = tok
+	pt.explicit = true
+}
+
+func (pt *pageToken) get() string {
+	return pt.tok
+}
+
+// more returns whether further pages should be fetched from the server.
+func (pt *pageToken) more() bool {
+	return pt.tok != "" || !pt.explicit
+}
+
+// stringsIterator provides an iterator API for a sequence of API page fetches that return lists of strings.
+type stringsIterator struct {
+	ctx     context.Context
+	strings []string
+	token   pageToken
+	fetch   func(ctx context.Context, tok string) (*stringsPage, error)
+}
+
+// Next returns the next string. If there are no more strings, Done will be returned.
+func (si *stringsIterator) Next() (string, error) {
+	for len(si.strings) == 0 && si.token.more() {
+		page, err := si.fetch(si.ctx, si.token.get())
+		if err != nil {
+			return "", err
+		}
+		si.token.set(page.tok)
+		si.strings = page.strings
+	}
+
+	if len(si.strings) == 0 {
+		return "", Done
+	}
+
+	s := si.strings[0]
+	si.strings = si.strings[1:]
+
+	return s, nil
+}
diff --git a/go/src/google.golang.org/cloud/pubsub/puller.go b/go/src/google.golang.org/cloud/pubsub/puller.go
index bc4b434..cc786f4 100644
--- a/go/src/google.golang.org/cloud/pubsub/puller.go
+++ b/go/src/google.golang.org/cloud/pubsub/puller.go
@@ -22,46 +22,94 @@
 
 // puller fetches messages from the server in a batch.
 type puller struct {
-	Client *Client
-	Sub    string
+	ctx    context.Context
+	cancel context.CancelFunc
 
-	// The maximum number of messages to fetch at once.
-	// No more than BatchSize messages will be outstanding at any time.
-	BatchSize int64
+	// keepAlive takes ownership of the lifetime of the message identified
+	// by ackID, ensuring that its ack deadline does not expire. It should
+	// be called each time a new message is fetched from the server, even
+	// if it is not yet returned from Next.
+	keepAlive func(ackID string)
 
-	// A function to call when a new message is fetched from the server, but not yet returned from Next.
-	Notify func(ackID string)
+	// abandon should be called for each message which has previously been
+	// passed to keepAlive, but will never be returned by Next.
+	abandon func(ackID string)
+
+	// fetch fetches a batch of messages from the server.
+	fetch func() ([]*Message, error)
 
 	mu  sync.Mutex
 	buf []*Message
 }
 
+// newPuller constructs a new puller.
+// batchSize is the maximum number of messages to fetch at once.
+// No more than batchSize messages will be outstanding at any time.
+func newPuller(s service, subName string, ctx context.Context, batchSize int64, keepAlive, abandon func(ackID string)) *puller {
+	ctx, cancel := context.WithCancel(ctx)
+	return &puller{
+		cancel:    cancel,
+		keepAlive: keepAlive,
+		abandon:   abandon,
+		ctx:       ctx,
+		fetch:     func() ([]*Message, error) { return s.fetchMessages(ctx, subName, batchSize) },
+	}
+}
+
+const maxPullAttempts = 2
+
 // Next returns the next message from the server, fetching a new batch if necessary.
-// Notify is called with the ackIDs of newly fetched messages.
-func (p *puller) Next(ctx context.Context) (*Message, error) {
+// keepAlive is called with the ackIDs of newly fetched messages.
+// If p.ctx has already been cancelled before Next is called, no new messages
+// will be fetched.
+func (p *puller) Next() (*Message, error) {
 	p.mu.Lock()
 	defer p.mu.Unlock()
 
+	// If ctx has been cancelled, return straight away (even if there are buffered messages available).
+	select {
+	case <-p.ctx.Done():
+		return nil, p.ctx.Err()
+	default:
+	}
+
 	for len(p.buf) == 0 {
+		var buf []*Message
 		var err error
-		p.buf, err = p.Client.s.fetchMessages(ctx, p.Sub, p.BatchSize)
+
+		for i := 0; i < maxPullAttempts; i++ {
+			// Once Stop has completed, all future calls to Next will immediately fail at this point.
+			buf, err = p.fetch()
+			if err == nil || err == context.Canceled || err == context.DeadlineExceeded {
+				break
+			}
+		}
 		if err != nil {
-			// TODO: retry before giving up.
 			return nil, err
 		}
-		for _, m := range p.buf {
-			p.Notify(m.AckID)
+
+		for _, m := range buf {
+			p.keepAlive(m.AckID)
 		}
+		p.buf = buf
 	}
+
 	m := p.buf[0]
 	p.buf = p.buf[1:]
 	return m, nil
 }
 
-// Pending returns the list of messages that have been fetched from the server
-// but not yet returned via Next.
-func (p *puller) Pending() []*Message {
+// Stop aborts any pending calls to Next, and prevents any future ones from succeeding.
+// Stop also abandons any messages that have been pre-fetched.
+// Once Stop completes, no calls to Next will succeed.
+func (p *puller) Stop() {
+	// Next may be executing in another goroutine. Cancel it, and then wait until it terminates.
+	p.cancel()
 	p.mu.Lock()
 	defer p.mu.Unlock()
-	return p.buf
+
+	for _, m := range p.buf {
+		p.abandon(m.AckID)
+	}
+	p.buf = nil
 }
diff --git a/go/src/google.golang.org/cloud/pubsub/puller_test.go b/go/src/google.golang.org/cloud/pubsub/puller_test.go
index b155a3c..4566b07 100644
--- a/go/src/google.golang.org/cloud/pubsub/puller_test.go
+++ b/go/src/google.golang.org/cloud/pubsub/puller_test.go
@@ -22,48 +22,56 @@
 	"golang.org/x/net/context"
 )
 
+type fetchResult struct {
+	msgs []*Message
+	err  error
+}
+
 type fetcherService struct {
 	service
-	msgs [][]*Message
+	results        []fetchResult
+	unexpectedCall bool
 }
 
 func (s *fetcherService) fetchMessages(ctx context.Context, subName string, maxMessages int64) ([]*Message, error) {
-	if len(s.msgs) == 0 {
+	if len(s.results) == 0 {
+		s.unexpectedCall = true
 		return nil, errors.New("bang")
 	}
-	ret := s.msgs[0]
-	s.msgs = s.msgs[1:]
-	return ret, nil
+	ret := s.results[0]
+	s.results = s.results[1:]
+	return ret.msgs, ret.err
 }
 
 func TestPuller(t *testing.T) {
 	s := &fetcherService{
-		msgs: [][]*Message{
-			{{AckID: "a"}, {AckID: "b"}},
+		results: []fetchResult{
+			{
+				msgs: []*Message{{AckID: "a"}, {AckID: "b"}},
+			},
 			{},
-			{{AckID: "c"}, {AckID: "d"}},
-			{{AckID: "e"}},
+			{
+				msgs: []*Message{{AckID: "c"}, {AckID: "d"}},
+			},
+			{
+				msgs: []*Message{{AckID: "e"}},
+			},
 		},
 	}
-	c := &Client{projectID: "projid", s: s}
 
 	pulled := make(chan string, 10)
-	pull := &puller{
-		Client:    c,
-		Sub:       "subname",
-		BatchSize: 2,
-		Notify:    func(ackID string) { pulled <- ackID },
-	}
+
+	pull := newPuller(s, "subname", context.Background(), 2, func(ackID string) { pulled <- ackID }, func(string) {})
 
 	got := []string{}
 	for i := 0; i < 5; i++ {
-		m, err := pull.Next(context.Background())
+		m, err := pull.Next()
 		got = append(got, m.AckID)
 		if err != nil {
 			t.Errorf("unexpected err from pull.Next: %v", err)
 		}
 	}
-	_, err := pull.Next(context.Background())
+	_, err := pull.Next()
 	if err == nil {
 		t.Errorf("pull.Next: expected err, got nil")
 	}
@@ -74,26 +82,25 @@
 	}
 }
 
-func TestPullerNotification(t *testing.T) {
+func TestPullerAddsToKeepAlive(t *testing.T) {
 	s := &fetcherService{
-		msgs: [][]*Message{
-			{{AckID: "a"}, {AckID: "b"}},
-			{{AckID: "c"}, {AckID: "d"}},
+		results: []fetchResult{
+			{
+				msgs: []*Message{{AckID: "a"}, {AckID: "b"}},
+			},
+			{
+				msgs: []*Message{{AckID: "c"}, {AckID: "d"}},
+			},
 		},
 	}
-	c := &Client{projectID: "projid", s: s}
 
 	pulled := make(chan string, 10)
-	pull := &puller{
-		Client:    c,
-		Sub:       "subname",
-		BatchSize: 2,
-		Notify:    func(ackID string) { pulled <- ackID },
-	}
+
+	pull := newPuller(s, "subname", context.Background(), 2, func(ackID string) { pulled <- ackID }, func(string) {})
 
 	got := []string{}
 	for i := 0; i < 3; i++ {
-		m, err := pull.Next(context.Background())
+		m, err := pull.Next()
 		got = append(got, m.AckID)
 		if err != nil {
 			t.Errorf("unexpected err from pull.Next: %v", err)
@@ -117,3 +124,31 @@
 		t.Errorf("pulled ack ids: got: %v ; want: %v", pulledIDs, want)
 	}
 }
+
+func TestPullerRetriesOnce(t *testing.T) {
+	bang := errors.New("bang")
+	s := &fetcherService{
+		results: []fetchResult{
+			{
+				err: bang,
+			},
+			{
+				err: bang,
+			},
+		},
+	}
+
+	pull := newPuller(s, "subname", context.Background(), 2, func(string) {}, func(string) {})
+
+	_, err := pull.Next()
+	if err != bang {
+		t.Errorf("pull.Next err got: %v, want: %v", err, bang)
+	}
+
+	if s.unexpectedCall {
+		t.Errorf("unexpected retry")
+	}
+	if len(s.results) != 0 {
+		t.Errorf("outstanding calls: got: %v, want: 0", len(s.results))
+	}
+}
diff --git a/go/src/google.golang.org/cloud/pubsub/service.go b/go/src/google.golang.org/cloud/pubsub/service.go
index 7e5235d..e894ec2 100644
--- a/go/src/google.golang.org/cloud/pubsub/service.go
+++ b/go/src/google.golang.org/cloud/pubsub/service.go
@@ -32,7 +32,7 @@
 type service interface {
 	createSubscription(ctx context.Context, topicName, subName string, ackDeadline time.Duration, pushConfig *PushConfig) error
 	getSubscriptionConfig(ctx context.Context, subName string) (*SubscriptionConfig, string, error)
-	listProjectSubscriptions(ctx context.Context, projName string) ([]string, error)
+	listProjectSubscriptions(ctx context.Context, projName, pageTok string) (*stringsPage, error)
 	deleteSubscription(ctx context.Context, name string) error
 	subscriptionExists(ctx context.Context, name string) (bool, error)
 	modifyPushConfig(ctx context.Context, subName string, conf *PushConfig) error
@@ -40,15 +40,16 @@
 	createTopic(ctx context.Context, name string) error
 	deleteTopic(ctx context.Context, name string) error
 	topicExists(ctx context.Context, name string) (bool, error)
-	listProjectTopics(ctx context.Context, projName string) ([]string, error)
-	listTopicSubscriptions(ctx context.Context, topicName string) ([]string, error)
+	listProjectTopics(ctx context.Context, projName, pageTok string) (*stringsPage, error)
+	listTopicSubscriptions(ctx context.Context, topicName, pageTok string) (*stringsPage, error)
 
 	modifyAckDeadline(ctx context.Context, subName string, deadline time.Duration, ackIDs []string) error
 	fetchMessages(ctx context.Context, subName string, maxMessages int64) ([]*Message, error)
 	publishMessages(ctx context.Context, topicName string, msgs []*Message) ([]string, error)
 
 	// splitAckIDs divides ackIDs into
-	//  * a batch of a size which is suitable for passing to acknowledge, and
+	//  * a batch of a size which is suitable for passing to acknowledge or
+	//    modifyAckDeadline, and
 	//  * the rest.
 	splitAckIDs(ackIDs []string) ([]string, []string)
 
@@ -102,19 +103,22 @@
 	return sub, rawSub.Topic, err
 }
 
-func (s *apiService) listProjectSubscriptions(ctx context.Context, projName string) ([]string, error) {
-	subs := []string{}
-	err := s.s.Projects.Subscriptions.List(projName).
-		Pages(ctx, func(res *raw.ListSubscriptionsResponse) error {
-			for _, s := range res.Subscriptions {
-				subs = append(subs, s.Name)
-			}
-			return nil
-		})
+// stringsPage contains a list of strings and a token for fetching the next page.
+type stringsPage struct {
+	strings []string
+	tok     string
+}
+
+func (s *apiService) listProjectSubscriptions(ctx context.Context, projName, pageTok string) (*stringsPage, error) {
+	resp, err := s.s.Projects.Subscriptions.List(projName).PageToken(pageTok).Context(ctx).Do()
 	if err != nil {
 		return nil, err
 	}
-	return subs, nil
+	subs := []string{}
+	for _, sub := range resp.Subscriptions {
+		subs = append(subs, sub.Name)
+	}
+	return &stringsPage{subs, resp.NextPageToken}, nil
 }
 
 func (s *apiService) deleteSubscription(ctx context.Context, name string) error {
@@ -141,19 +145,16 @@
 	return err
 }
 
-func (s *apiService) listProjectTopics(ctx context.Context, projName string) ([]string, error) {
-	topics := []string{}
-	err := s.s.Projects.Topics.List(projName).
-		Pages(ctx, func(res *raw.ListTopicsResponse) error {
-			for _, topic := range res.Topics {
-				topics = append(topics, topic.Name)
-			}
-			return nil
-		})
+func (s *apiService) listProjectTopics(ctx context.Context, projName, pageTok string) (*stringsPage, error) {
+	resp, err := s.s.Projects.Topics.List(projName).PageToken(pageTok).Context(ctx).Do()
 	if err != nil {
 		return nil, err
 	}
-	return topics, nil
+	topics := []string{}
+	for _, topic := range resp.Topics {
+		topics = append(topics, topic.Name)
+	}
+	return &stringsPage{topics, resp.NextPageToken}, nil
 }
 
 func (s *apiService) deleteTopic(ctx context.Context, name string) error {
@@ -172,19 +173,16 @@
 	return false, err
 }
 
-func (s *apiService) listTopicSubscriptions(ctx context.Context, topicName string) ([]string, error) {
-	subs := []string{}
-	err := s.s.Projects.Topics.Subscriptions.List(topicName).
-		Pages(ctx, func(res *raw.ListTopicSubscriptionsResponse) error {
-			for _, s := range res.Subscriptions {
-				subs = append(subs, s)
-			}
-			return nil
-		})
+func (s *apiService) listTopicSubscriptions(ctx context.Context, topicName, pageTok string) (*stringsPage, error) {
+	resp, err := s.s.Projects.Topics.Subscriptions.List(topicName).PageToken(pageTok).Context(ctx).Do()
 	if err != nil {
 		return nil, err
 	}
-	return subs, nil
+	subs := []string{}
+	for _, sub := range resp.Subscriptions {
+		subs = append(subs, sub)
+	}
+	return &stringsPage{subs, resp.NextPageToken}, nil
 }
 
 func (s *apiService) modifyAckDeadline(ctx context.Context, subName string, deadline time.Duration, ackIDs []string) error {
@@ -199,8 +197,9 @@
 }
 
 // maxPayload is the maximum number of bytes to devote to actual ids in
-// acknowledgement requests.  Note that there is ~1K of constant overhead, plus
-// 3 bytes per ID (two quotes and a comma).  The total payload size may not exceed 512K.
+// acknowledgement or modifyAckDeadline requests.  Note that there is ~1K of
+// constant overhead, plus 3 bytes per ID (two quotes and a comma).  The total
+// payload size may not exceed 512K.
 const maxPayload = 500 * 1024
 const overheadPerID = 3 // 3 bytes of JSON
 
diff --git a/go/src/google.golang.org/cloud/pubsub/subscription.go b/go/src/google.golang.org/cloud/pubsub/subscription.go
index 1d6bc0f..9ade786 100644
--- a/go/src/google.golang.org/cloud/pubsub/subscription.go
+++ b/go/src/google.golang.org/cloud/pubsub/subscription.go
@@ -28,39 +28,54 @@
 // The default maximum number of messages that are prefetched from the server.
 const DefaultMaxPrefetch = 100
 
-// SubscriptionHandle is a reference to a PubSub subscription.
-type SubscriptionHandle struct {
-	c *Client
+// Subscription is a reference to a PubSub subscription.
+type Subscription struct {
+	s service
 
 	// The fully qualified identifier for the subscription, in the format "projects/<projid>/subscriptions/<name>"
 	name string
 }
 
 // Subscription creates a reference to a subscription.
-func (c *Client) Subscription(name string) *SubscriptionHandle {
-	return &SubscriptionHandle{
-		c:    c,
+func (c *Client) Subscription(name string) *Subscription {
+	return &Subscription{
+		s:    c.s,
 		name: fmt.Sprintf("projects/%s/subscriptions/%s", c.projectID, name),
 	}
 }
 
 // Name returns the globally unique name for the subscription.
-func (s *SubscriptionHandle) Name() string {
+func (s *Subscription) Name() string {
 	return s.name
 }
 
-// Subscriptions lists all of the subscriptions for the client's project.
-func (c *Client) Subscriptions(ctx context.Context) ([]*SubscriptionHandle, error) {
-	subNames, err := c.s.listProjectSubscriptions(ctx, c.fullyQualifiedProjectName())
+// Subscriptions returns an iterator which returns all of the subscriptions for the client's project.
+func (c *Client) Subscriptions(ctx context.Context) *SubscriptionIterator {
+	return &SubscriptionIterator{
+		s: c.s,
+		stringsIterator: stringsIterator{
+			ctx: ctx,
+			fetch: func(ctx context.Context, tok string) (*stringsPage, error) {
+				return c.s.listProjectSubscriptions(ctx, c.fullyQualifiedProjectName(), tok)
+			},
+		},
+	}
+}
+
+// SubscriptionIterator is an iterator that returns a series of subscriptions.
+type SubscriptionIterator struct {
+	s service
+	stringsIterator
+}
+
+// Next returns the next subscription. If there are no more subscriptions, Done will be returned.
+func (subs *SubscriptionIterator) Next() (*Subscription, error) {
+	subName, err := subs.stringsIterator.Next()
 	if err != nil {
 		return nil, err
 	}
 
-	subs := []*SubscriptionHandle{}
-	for _, s := range subNames {
-		subs = append(subs, &SubscriptionHandle{c: c, name: s})
-	}
-	return subs, nil
+	return &Subscription{s: subs.s, name: subName}, nil
 }
 
 // PushConfig contains configuration for subscriptions that operate in push mode.
@@ -74,7 +89,7 @@
 
 // SubscriptionConfig contains the configuration of a subscription.
 type SubscriptionConfig struct {
-	Topic      *TopicHandle
+	Topic      *Topic
 	PushConfig PushConfig
 
 	// The default maximum time after a subscriber receives a message
@@ -86,57 +101,57 @@
 }
 
 // Delete deletes the subscription.
-func (s *SubscriptionHandle) Delete(ctx context.Context) error {
-	return s.c.s.deleteSubscription(ctx, s.name)
+func (s *Subscription) Delete(ctx context.Context) error {
+	return s.s.deleteSubscription(ctx, s.name)
 }
 
 // Exists reports whether the subscription exists on the server.
-func (s *SubscriptionHandle) Exists(ctx context.Context) (bool, error) {
-	return s.c.s.subscriptionExists(ctx, s.name)
+func (s *Subscription) Exists(ctx context.Context) (bool, error) {
+	return s.s.subscriptionExists(ctx, s.name)
 }
 
 // Config fetches the current configuration for the subscription.
-func (s *SubscriptionHandle) Config(ctx context.Context) (*SubscriptionConfig, error) {
-	sub, topicName, err := s.c.s.getSubscriptionConfig(ctx, s.name)
+func (s *Subscription) Config(ctx context.Context) (*SubscriptionConfig, error) {
+	conf, topicName, err := s.s.getSubscriptionConfig(ctx, s.name)
 	if err != nil {
 		return nil, err
 	}
-	sub.Topic = &TopicHandle{
-		c:    s.c,
+	conf.Topic = &Topic{
+		s:    s.s,
 		name: topicName,
 	}
-	return sub, nil
+	return conf, nil
 }
 
 // Pull returns an Iterator that can be used to fetch Messages. The Iterator
 // will automatically extend the ack deadline of all fetched Messages, for the
-// period specified by DefaultMaxExtension. This may be overriden by supplying
+// period specified by DefaultMaxExtension. This may be overridden by supplying
 // a MaxExtension pull option.
 //
 // If ctx is cancelled or exceeds its deadline, outstanding acks or deadline
 // extensions will fail.
 //
 // The caller must call Stop on the Iterator once finished with it.
-func (s *SubscriptionHandle) Pull(ctx context.Context, opts ...PullOption) (*Iterator, error) {
+func (s *Subscription) Pull(ctx context.Context, opts ...PullOption) (*Iterator, error) {
 	config, err := s.Config(ctx)
 	if err != nil {
 		return nil, err
 	}
 	po := processPullOptions(opts)
 	po.ackDeadline = config.AckDeadline
-	return s.c.newIterator(ctx, s.name, po), nil
+	return newIterator(ctx, s.s, s.name, po), nil
 }
 
 // ModifyPushConfig updates the endpoint URL and other attributes of a push subscription.
-func (s *SubscriptionHandle) ModifyPushConfig(ctx context.Context, conf *PushConfig) error {
+func (s *Subscription) ModifyPushConfig(ctx context.Context, conf *PushConfig) error {
 	if conf == nil {
 		return errors.New("must supply non-nil PushConfig")
 	}
 
-	return s.c.s.modifyPushConfig(ctx, s.name, conf)
+	return s.s.modifyPushConfig(ctx, s.name, conf)
 }
 
-// A PullOption is an optional argument to SubscriptionHandle.Pull.
+// A PullOption is an optional argument to Subscription.Pull.
 type PullOption interface {
 	setOptions(o *pullOptions)
 }
@@ -226,7 +241,7 @@
 // pushConfig may be set to configure this subscription for push delivery.
 //
 // If the subscription already exists an error will be returned.
-func (c *Client) NewSubscription(ctx context.Context, name string, topic *TopicHandle, ackDeadline time.Duration, pushConfig *PushConfig) (*SubscriptionHandle, error) {
+func (c *Client) NewSubscription(ctx context.Context, name string, topic *Topic, ackDeadline time.Duration, pushConfig *PushConfig) (*Subscription, error) {
 	if ackDeadline == 0 {
 		ackDeadline = 10 * time.Second
 	}
diff --git a/go/src/google.golang.org/cloud/pubsub/subscription_test.go b/go/src/google.golang.org/cloud/pubsub/subscription_test.go
new file mode 100644
index 0000000..9571411
--- /dev/null
+++ b/go/src/google.golang.org/cloud/pubsub/subscription_test.go
@@ -0,0 +1,147 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pubsub
+
+import (
+	"errors"
+	"reflect"
+	"testing"
+
+	"golang.org/x/net/context"
+)
+
+type subListCall struct {
+	inTok, outTok string
+	subs          []string
+	err           error
+}
+
+type subListService struct {
+	service
+	calls []subListCall
+
+	t *testing.T // for error logging.
+}
+
+func (s *subListService) listSubs(pageTok string) (*stringsPage, error) {
+	if len(s.calls) == 0 {
+		s.t.Errorf("unexpected call: pageTok: %q", pageTok)
+		return nil, errors.New("bang")
+	}
+
+	call := s.calls[0]
+	s.calls = s.calls[1:]
+	if call.inTok != pageTok {
+		s.t.Errorf("page token: got: %v, want: %v", pageTok, call.inTok)
+	}
+	return &stringsPage{call.subs, call.outTok}, call.err
+}
+
+func (s *subListService) listProjectSubscriptions(ctx context.Context, projName, pageTok string) (*stringsPage, error) {
+	if projName != "projects/projid" {
+		s.t.Errorf("unexpected call: projName: %q, pageTok: %q", projName, pageTok)
+		return nil, errors.New("bang")
+	}
+	return s.listSubs(pageTok)
+}
+
+func (s *subListService) listTopicSubscriptions(ctx context.Context, topicName, pageTok string) (*stringsPage, error) {
+	if topicName != "projects/projid/topics/topic" {
+		s.t.Errorf("unexpected call: topicName: %q, pageTok: %q", topicName, pageTok)
+		return nil, errors.New("bang")
+	}
+	return s.listSubs(pageTok)
+}
+
+// slurpSubs returns the remaining subscriptions from this iterator.
+func slurpSubs(it *SubscriptionIterator) ([]*Subscription, error) {
+	var subs []*Subscription
+	for {
+		switch sub, err := it.Next(); err {
+		case nil:
+			subs = append(subs, sub)
+		case Done:
+			return subs, nil
+		default:
+			return nil, err
+		}
+	}
+}
+
+func TestListProjectSubscriptions(t *testing.T) {
+	calls := []subListCall{
+		{
+			subs:   []string{"s1", "s2"},
+			outTok: "a",
+		},
+		{
+			inTok:  "a",
+			subs:   []string{"s3"},
+			outTok: "",
+		},
+	}
+	s := &subListService{calls: calls, t: t}
+	c := &Client{projectID: "projid", s: s}
+	subs, err := slurpSubs(c.Subscriptions(context.Background()))
+	if err != nil {
+		t.Errorf("error listing subscriptions: %v", err)
+	}
+	got := subNames(subs)
+	want := []string{"s1", "s2", "s3"}
+	if !reflect.DeepEqual(got, want) {
+		t.Errorf("sub list: got: %v, want: %v", got, want)
+	}
+	if len(s.calls) != 0 {
+		t.Errorf("outstanding calls: %v", s.calls)
+	}
+}
+
+func TestListTopicSubscriptions(t *testing.T) {
+	calls := []subListCall{
+		{
+			subs:   []string{"s1", "s2"},
+			outTok: "a",
+		},
+		{
+			inTok:  "a",
+			subs:   []string{"s3"},
+			outTok: "",
+		},
+	}
+	s := &subListService{calls: calls, t: t}
+	c := &Client{projectID: "projid", s: s}
+	subs, err := slurpSubs(c.Topic("topic").Subscriptions(context.Background()))
+	if err != nil {
+		t.Errorf("error listing subscriptions: %v", err)
+	}
+	got := subNames(subs)
+	want := []string{"s1", "s2", "s3"}
+	if !reflect.DeepEqual(got, want) {
+		t.Errorf("sub list: got: %v, want: %v", got, want)
+	}
+	if len(s.calls) != 0 {
+		t.Errorf("outstanding calls: %v", s.calls)
+	}
+}
+
+func subNames(subs []*Subscription) []string {
+	var names []string
+
+	for _, sub := range subs {
+		names = append(names, sub.name)
+	}
+	return names
+}
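The test above drives the iterator against a scripted fake service that pops expected calls in order. As a standalone sketch of that pattern (names here are illustrative, not the package's API), a minimal fake records each expected page token and canned response, and flags any surprise call:

```go
package main

import "fmt"

// expectedCall records one expected request and its canned response,
// mirroring the subListCall pattern in the test above.
type expectedCall struct {
	inTok, outTok string
	items         []string
}

// fakeService pops expected calls in order and records surprises.
type fakeService struct {
	calls []expectedCall
	errs  []string
}

func (s *fakeService) list(pageTok string) ([]string, string) {
	if len(s.calls) == 0 {
		s.errs = append(s.errs, fmt.Sprintf("unexpected call: %q", pageTok))
		return nil, ""
	}
	call := s.calls[0]
	s.calls = s.calls[1:]
	if call.inTok != pageTok {
		s.errs = append(s.errs, fmt.Sprintf("token: got %q, want %q", pageTok, call.inTok))
	}
	return call.items, call.outTok
}

func main() {
	s := &fakeService{calls: []expectedCall{
		{items: []string{"s1", "s2"}, outTok: "a"},
		{inTok: "a", items: []string{"s3"}},
	}}
	var all []string
	tok := ""
	for {
		items, next := s.list(tok)
		all = append(all, items...)
		if next == "" {
			break
		}
		tok = next
	}
	fmt.Println(all, len(s.calls), len(s.errs))
}
```

Checking that `s.calls` is empty at the end (as the tests do) catches pages that were scripted but never requested.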
diff --git a/go/src/google.golang.org/cloud/pubsub/topic.go b/go/src/google.golang.org/cloud/pubsub/topic.go
index cc2cd81..38a49ed 100644
--- a/go/src/google.golang.org/cloud/pubsub/topic.go
+++ b/go/src/google.golang.org/cloud/pubsub/topic.go
@@ -22,9 +22,9 @@
 
 const MaxPublishBatchSize = 1000
 
-// TopicHandle is a reference to a PubSub topic.
-type TopicHandle struct {
-	c *Client
+// Topic is a reference to a PubSub topic.
+type Topic struct {
+	s service
 
 	// The fully qualified identifier for the topic, in the format "projects/<projid>/topics/<name>"
 	name string
@@ -36,73 +36,89 @@
 // tildes (~), plus (+) or percent signs (%). It must be between 3 and 255
 // characters in length, and must not start with "goog".
 // If the topic already exists an error will be returned.
-func (c *Client) NewTopic(ctx context.Context, name string) (*TopicHandle, error) {
+func (c *Client) NewTopic(ctx context.Context, name string) (*Topic, error) {
 	t := c.Topic(name)
 	err := c.s.createTopic(ctx, t.Name())
 	return t, err
 }
 
 // Topic creates a reference to a topic.
-func (c *Client) Topic(name string) *TopicHandle {
-	return &TopicHandle{c: c, name: fmt.Sprintf("projects/%s/topics/%s", c.projectID, name)}
+func (c *Client) Topic(name string) *Topic {
+	return &Topic{s: c.s, name: fmt.Sprintf("projects/%s/topics/%s", c.projectID, name)}
 }
 
-// Topics lists all of the topics for the client's project.
-func (c *Client) Topics(ctx context.Context) ([]*TopicHandle, error) {
-	topicNames, err := c.s.listProjectTopics(ctx, c.fullyQualifiedProjectName())
+// Topics returns an iterator which returns all of the topics for the client's project.
+func (c *Client) Topics(ctx context.Context) *TopicIterator {
+	return &TopicIterator{
+		s: c.s,
+		stringsIterator: stringsIterator{
+			ctx: ctx,
+			fetch: func(ctx context.Context, tok string) (*stringsPage, error) {
+				return c.s.listProjectTopics(ctx, c.fullyQualifiedProjectName(), tok)
+			},
+		},
+	}
+}
+
+// TopicIterator is an iterator that returns a series of topics.
+type TopicIterator struct {
+	s service
+	stringsIterator
+}
+
+// Next returns the next topic. If there are no more topics, Done will be returned.
+func (tps *TopicIterator) Next() (*Topic, error) {
+	topicName, err := tps.stringsIterator.Next()
 	if err != nil {
 		return nil, err
 	}
-
-	topics := []*TopicHandle{}
-	for _, t := range topicNames {
-		topics = append(topics, &TopicHandle{c: c, name: t})
-	}
-	return topics, nil
+	return &Topic{s: tps.s, name: topicName}, nil
 }
 
 // Name returns the globally unique name for the topic.
-func (t *TopicHandle) Name() string {
+func (t *Topic) Name() string {
 	return t.name
 }
 
 // Delete deletes the topic.
-func (t *TopicHandle) Delete(ctx context.Context) error {
-	return t.c.s.deleteTopic(ctx, t.name)
+func (t *Topic) Delete(ctx context.Context) error {
+	return t.s.deleteTopic(ctx, t.name)
 }
 
 // Exists reports whether the topic exists on the server.
-func (t *TopicHandle) Exists(ctx context.Context) (bool, error) {
+func (t *Topic) Exists(ctx context.Context) (bool, error) {
 	if t.name == "_deleted-topic_" {
 		return false, nil
 	}
 
-	return t.c.s.topicExists(ctx, t.name)
+	return t.s.topicExists(ctx, t.name)
 }
 
-// Subscriptions lists the subscriptions for this topic.
-func (t *TopicHandle) Subscriptions(ctx context.Context) ([]*SubscriptionHandle, error) {
-	subNames, err := t.c.s.listTopicSubscriptions(ctx, t.name)
-	if err != nil {
-		return nil, err
-	}
+// Subscriptions returns an iterator which returns the subscriptions for this topic.
+func (t *Topic) Subscriptions(ctx context.Context) *SubscriptionIterator {
+	// NOTE: the subscriptions ultimately returned by this iterator may
+	// belong to a different project than t.
+	return &SubscriptionIterator{
+		s: t.s,
+		stringsIterator: stringsIterator{
+			ctx: ctx,
+			fetch: func(ctx context.Context, tok string) (*stringsPage, error) {
 
-	subs := []*SubscriptionHandle{}
-	for _, s := range subNames {
-		subs = append(subs, &SubscriptionHandle{c: t.c, name: s})
+				return t.s.listTopicSubscriptions(ctx, t.name, tok)
+			},
+		},
 	}
-	return subs, nil
 }
 
 // Publish publishes the supplied Messages to the topic.
 // If successful, the server-assigned message IDs are returned in the same order as the supplied Messages.
 // At most MaxPublishBatchSize messages may be supplied.
-func (t *TopicHandle) Publish(ctx context.Context, msgs ...*Message) ([]string, error) {
+func (t *Topic) Publish(ctx context.Context, msgs ...*Message) ([]string, error) {
 	if len(msgs) == 0 {
 		return nil, nil
 	}
 	if len(msgs) > MaxPublishBatchSize {
 		return nil, fmt.Errorf("pubsub: got %d messages, but maximum batch size is %d", len(msgs), MaxPublishBatchSize)
 	}
-	return t.c.s.publishMessages(ctx, t.name, msgs)
+	return t.s.publishMessages(ctx, t.name, msgs)
 }
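The change above replaces eager list methods with token-paged iterators built on `stringsIterator`. A simplified standalone sketch of that pattern (the real type also carries a context; `Done` is the package's sentinel) buffers one page at a time and refetches on demand:

```go
package main

import (
	"errors"
	"fmt"
)

// Done signals the end of iteration, as in the pubsub package.
var Done = errors.New("no more items")

// page is one fetched batch plus the token for the next one,
// analogous to the package's stringsPage.
type page struct {
	items []string
	tok   string
}

// iterator buffers the current page and fetches the next one lazily.
type iterator struct {
	fetch   func(tok string) (*page, error)
	items   []string
	tok     string
	fetched bool
}

// Next returns the next item, fetching further pages as needed.
// The inner loop also skips empty intermediate pages.
func (it *iterator) Next() (string, error) {
	for len(it.items) == 0 {
		if it.fetched && it.tok == "" {
			return "", Done
		}
		p, err := it.fetch(it.tok)
		if err != nil {
			return "", err
		}
		it.items, it.tok, it.fetched = p.items, p.tok, true
	}
	item := it.items[0]
	it.items = it.items[1:]
	return item, nil
}

func main() {
	pages := map[string]*page{
		"":  {items: []string{"t1", "t2"}, tok: "a"},
		"a": {items: []string{"t3"}},
	}
	it := &iterator{fetch: func(tok string) (*page, error) { return pages[tok], nil }}
	for {
		s, err := it.Next()
		if err == Done {
			break
		}
		fmt.Println(s)
	}
}
```

Returning an iterator instead of a slice lets callers stop early without paying for pages they never read.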
diff --git a/go/src/google.golang.org/cloud/pubsub/topic_test.go b/go/src/google.golang.org/cloud/pubsub/topic_test.go
new file mode 100644
index 0000000..911fc80
--- /dev/null
+++ b/go/src/google.golang.org/cloud/pubsub/topic_test.go
@@ -0,0 +1,141 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pubsub
+
+import (
+	"errors"
+	"reflect"
+	"testing"
+
+	"golang.org/x/net/context"
+)
+
+type topicListCall struct {
+	inTok, outTok string
+	topics        []string
+	err           error
+}
+
+type topicListService struct {
+	service
+	calls []topicListCall
+
+	t *testing.T // for error logging.
+}
+
+func (s *topicListService) listProjectTopics(ctx context.Context, projName, pageTok string) (*stringsPage, error) {
+	if len(s.calls) == 0 || projName != "projects/projid" {
+		s.t.Errorf("unexpected call: projName: %q, pageTok: %q", projName, pageTok)
+		return nil, errors.New("bang")
+	}
+
+	call := s.calls[0]
+	s.calls = s.calls[1:]
+	if call.inTok != pageTok {
+		s.t.Errorf("page token: got: %v, want: %v", pageTok, call.inTok)
+	}
+	return &stringsPage{call.topics, call.outTok}, call.err
+}
+
+func checkTopicListing(t *testing.T, calls []topicListCall, want []string) {
+	s := &topicListService{calls: calls, t: t}
+	c := &Client{projectID: "projid", s: s}
+	topics, err := slurpTopics(c.Topics(context.Background()))
+	if err != nil {
+		t.Errorf("error listing topics: %v", err)
+	}
+	got := topicNames(topics)
+	if !reflect.DeepEqual(got, want) {
+		t.Errorf("topic list: got: %v, want: %v", got, want)
+	}
+	if len(s.calls) != 0 {
+		t.Errorf("outstanding calls: %v", s.calls)
+	}
+}
+
+// slurpTopics returns the remaining topics from this iterator.
+func slurpTopics(it *TopicIterator) ([]*Topic, error) {
+	var topics []*Topic
+	for {
+		switch topic, err := it.Next(); err {
+		case nil:
+			topics = append(topics, topic)
+		case Done:
+			return topics, nil
+		default:
+			return nil, err
+		}
+	}
+}
+
+func TestListTopics(t *testing.T) {
+	calls := []topicListCall{
+		{
+			topics: []string{"t1", "t2"},
+			outTok: "a",
+		},
+		{
+			inTok:  "a",
+			topics: []string{"t3"},
+			outTok: "b",
+		},
+		{
+			inTok:  "b",
+			topics: []string{},
+			outTok: "c",
+		},
+		{
+			inTok:  "c",
+			topics: []string{"t4"},
+			outTok: "",
+		},
+	}
+	checkTopicListing(t, calls, []string{"t1", "t2", "t3", "t4"})
+}
+
+func TestListCompletelyEmptyTopics(t *testing.T) {
+	calls := []topicListCall{
+		{
+			outTok: "",
+		},
+	}
+	var want []string
+	checkTopicListing(t, calls, want)
+}
+
+func TestListFinalEmptyPage(t *testing.T) {
+	calls := []topicListCall{
+		{
+			topics: []string{"t1", "t2"},
+			outTok: "a",
+		},
+		{
+			inTok:  "a",
+			topics: []string{},
+			outTok: "",
+		},
+	}
+	checkTopicListing(t, calls, []string{"t1", "t2"})
+}
+
+func topicNames(topics []*Topic) []string {
+	var names []string
+
+	for _, topic := range topics {
+		names = append(names, topic.name)
+	}
+	return names
+}
diff --git a/go/src/google.golang.org/cloud/storage/integration_test.go b/go/src/google.golang.org/cloud/storage/integration_test.go
index 42a8c69..cdb6ec8 100644
--- a/go/src/google.golang.org/cloud/storage/integration_test.go
+++ b/go/src/google.golang.org/cloud/storage/integration_test.go
@@ -33,6 +33,7 @@
 
 	"golang.org/x/net/context"
 
+	"google.golang.org/api/googleapi"
 	"google.golang.org/cloud"
 	"google.golang.org/cloud/internal/testutil"
 )
@@ -317,6 +318,24 @@
 
 	objName := objects[0]
 
+	// Test NewReader googleapi.Error.
+	// Since a 429 or 5xx is hard to cause, we trigger a 416.
+	realLen := len(contents[objName])
+	_, err = bkt.Object(objName).NewRangeReader(ctx, int64(realLen*2), 10)
+	if err, ok := err.(*googleapi.Error); !ok {
+		t.Error("NewRangeReader did not return a googleapi.Error")
+	} else {
+		if err.Code != 416 {
+			t.Errorf("Code = %d; want %d", err.Code, 416)
+		}
+		if len(err.Header) == 0 {
+			t.Error("Missing googleapi.Error.Header")
+		}
+		if len(err.Body) == 0 {
+			t.Error("Missing googleapi.Error.Body")
+		}
+	}
+
 	// Test StatObject.
 	o, err := bkt.Object(objName).Attrs(ctx)
 	if err != nil {
@@ -449,6 +468,10 @@
 	if err := bkt.Object(copyName).Delete(ctx); err != nil {
 		t.Errorf("Deletion of %v failed with %v", copyName, err)
 	}
+	// Deleting it a second time should return ErrObjectNotExist.
+	if err := bkt.Object(copyName).Delete(ctx); err != ErrObjectNotExist {
+		t.Errorf("second deletion of %v = %v; want ErrObjectNotExist", copyName, err)
+	}
 	_, err = bkt.Object(copyName).Attrs(ctx)
 	if err != ErrObjectNotExist {
 		t.Errorf("Copy is expected to be deleted, stat errored with %v", err)
@@ -616,6 +639,35 @@
 	}
 }
 
+func TestZeroSizedObject(t *testing.T) {
+	ctx := context.Background()
+	client, bucket := testConfig(ctx, t)
+	defer client.Close()
+
+	obj := client.Bucket(bucket).Object("zero" + suffix)
+
+	// Check writing it works as expected.
+	w := obj.NewWriter(ctx)
+	if err := w.Close(); err != nil {
+		t.Fatalf("Writer.Close: %v", err)
+	}
+	defer obj.Delete(ctx)
+
+	// Check we can read it too.
+	r, err := obj.NewReader(ctx)
+	if err != nil {
+		t.Fatalf("NewReader: %v", err)
+	}
+	defer r.Close()
+	body, err := ioutil.ReadAll(r)
+	if err != nil {
+		t.Fatalf("ioutil.ReadAll: %v", err)
+	}
+	if len(body) != 0 {
+		t.Errorf("Body is %v, want empty []byte{}", body)
+	}
+}
+
 // cleanup deletes any objects in the default bucket which were created
 // during this test run (those with the designated suffix), and any
 // objects whose suffix indicates they were created over an hour ago.
diff --git a/go/src/google.golang.org/cloud/storage/storage.go b/go/src/google.golang.org/cloud/storage/storage.go
index 589db93..8c56220 100644
--- a/go/src/google.golang.org/cloud/storage/storage.go
+++ b/go/src/google.golang.org/cloud/storage/storage.go
@@ -446,7 +446,16 @@
 	if err := applyConds("Delete", o.conds, call); err != nil {
 		return err
 	}
-	return call.Do()
+	err := call.Do()
+	switch e := err.(type) {
+	case nil:
+		return nil
+	case *googleapi.Error:
+		if e.Code == http.StatusNotFound {
+			return ErrObjectNotExist
+		}
+	}
+	return err
 }
 
 // CopyTo copies the object to the given dst.
@@ -520,7 +529,7 @@
 	if err := applyConds("NewReader", o.conds, objectsGetCall{req}); err != nil {
 		return nil, err
 	}
-	if length < 0 {
+	if length < 0 && offset > 0 {
 		req.Header.Set("Range", fmt.Sprintf("bytes=%d-", offset))
 	} else if length > 0 {
 		req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+length-1))
@@ -534,8 +543,13 @@
 		return nil, ErrObjectNotExist
 	}
 	if res.StatusCode < 200 || res.StatusCode > 299 {
+		body, _ := ioutil.ReadAll(res.Body)
 		res.Body.Close()
-		return nil, fmt.Errorf("storage: can't read object %v/%v, status code: %v", o.bucket, o.object, res.Status)
+		return nil, &googleapi.Error{
+			Code:   res.StatusCode,
+			Header: res.Header,
+			Body:   string(body),
+		}
 	}
 	if offset > 0 && length != 0 && res.StatusCode != http.StatusPartialContent {
 		res.Body.Close()
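The storage change above makes `Delete` translate an HTTP 404 from the API into the package's `ErrObjectNotExist` sentinel. A minimal standalone sketch of that mapping (with a stand-in for `googleapi.Error`; names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// ErrObjectNotExist mirrors the storage package's sentinel error.
var ErrObjectNotExist = errors.New("storage: object doesn't exist")

// apiError stands in for googleapi.Error: an HTTP status plus body.
type apiError struct {
	Code int
	Body string
}

func (e *apiError) Error() string { return fmt.Sprintf("googleapi: HTTP %d", e.Code) }

// mapDeleteErr converts a 404 API error into the sentinel and leaves
// every other error (and nil) untouched, as the new Delete does.
func mapDeleteErr(err error) error {
	switch e := err.(type) {
	case nil:
		return nil
	case *apiError:
		if e.Code == http.StatusNotFound {
			return ErrObjectNotExist
		}
	}
	return err
}

func main() {
	fmt.Println(mapDeleteErr(&apiError{Code: 404}) == ErrObjectNotExist)
	fmt.Println(mapDeleteErr(nil))
	fmt.Println(mapDeleteErr(&apiError{Code: 500}))
}
```

A sentinel error lets callers write `if err == ErrObjectNotExist` instead of type-asserting and inspecting status codes themselves, which is what the second-deletion integration test above relies on.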
diff --git a/go/src/google.golang.org/grpc/Documentation/grpc-metadata.md b/go/src/google.golang.org/grpc/Documentation/grpc-metadata.md
index b387e88..928f557 100644
--- a/go/src/google.golang.org/grpc/Documentation/grpc-metadata.md
+++ b/go/src/google.golang.org/grpc/Documentation/grpc-metadata.md
@@ -70,7 +70,8 @@
 
 ```go
 func (s *server) SomeRPC(ctx context.Context, in *pb.SomeRequest) (*pb.SomeResponse, err) {
-    md := metadata.FromContext(ctx)
+    md, ok := metadata.FromContext(ctx)
+    // do something with metadata
 }
 ```
 
diff --git a/go/src/google.golang.org/grpc/PATENTS b/go/src/google.golang.org/grpc/PATENTS
index 619f9db..69b4795 100644
--- a/go/src/google.golang.org/grpc/PATENTS
+++ b/go/src/google.golang.org/grpc/PATENTS
@@ -1,22 +1,22 @@
 Additional IP Rights Grant (Patents)
 
 "This implementation" means the copyrightable works distributed by
-Google as part of the GRPC project.
+Google as part of the gRPC project.
 
 Google hereby grants to You a perpetual, worldwide, non-exclusive,
 no-charge, royalty-free, irrevocable (except as stated in this section)
 patent license to make, have made, use, offer to sell, sell, import,
 transfer and otherwise run, modify and propagate the contents of this
-implementation of GRPC, where such license applies only to those patent
+implementation of gRPC, where such license applies only to those patent
 claims, both currently owned or controlled by Google and acquired in
 the future, licensable by Google that are necessarily infringed by this
-implementation of GRPC.  This grant does not include claims that would be
+implementation of gRPC.  This grant does not include claims that would be
 infringed only as a consequence of further modification of this
 implementation.  If you or your agent or exclusive licensee institute or
 order or agree to the institution of patent litigation against any
 entity (including a cross-claim or counterclaim in a lawsuit) alleging
-that this implementation of GRPC or any code incorporated within this
-implementation of GRPC constitutes direct or contributory patent
+that this implementation of gRPC or any code incorporated within this
+implementation of gRPC constitutes direct or contributory patent
 infringement, or inducement of patent infringement, then any patent
-rights granted to you under this License for this implementation of GRPC
+rights granted to you under this License for this implementation of gRPC
 shall terminate as of the date such litigation is filed.
diff --git a/go/src/google.golang.org/grpc/README.google b/go/src/google.golang.org/grpc/README.google
index a4342b0..7118b9c 100644
--- a/go/src/google.golang.org/grpc/README.google
+++ b/go/src/google.golang.org/grpc/README.google
@@ -1,5 +1,5 @@
-URL: https://github.com/grpc/grpc-go/archive/ecd00d52ac82a2cd37e17bf91d9c6ca228b71745.zip
-Version: ecd00d52ac82a2cd37e17bf91d9c6ca228b71745
+URL: https://github.com/grpc/grpc-go/archive/4c2aaab42efd64c253e2eea35d987a74a1c8c20d.zip
+Version: 4c2aaab42efd64c253e2eea35d987a74a1c8c20d
 License: New BSD
 License File: LICENSE
 
@@ -10,6 +10,3 @@
Add test/dummy.go to work around the "no buildable Go source files" error when building
 google.golang.org/grpc/test package.
 
-
-
-
diff --git a/go/src/google.golang.org/grpc/backoff.go b/go/src/google.golang.org/grpc/backoff.go
new file mode 100644
index 0000000..52f4f10
--- /dev/null
+++ b/go/src/google.golang.org/grpc/backoff.go
@@ -0,0 +1,80 @@
+package grpc
+
+import (
+	"math/rand"
+	"time"
+)
+
+// DefaultBackoffConfig uses values specified for backoff in
+// https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md.
+var (
+	DefaultBackoffConfig = BackoffConfig{
+		MaxDelay:  120 * time.Second,
+		baseDelay: 1.0 * time.Second,
+		factor:    1.6,
+		jitter:    0.2,
+	}
+)
+
+// backoffStrategy defines the methodology for backing off after a grpc
+// connection failure.
+//
+// This is unexported until the gRPC project decides whether or not to allow
+// alternative backoff strategies. Once a decision is made, this type and its
+// method may be exported.
+type backoffStrategy interface {
+	// backoff returns the amount of time to wait before the next retry given
+	// the number of consecutive failures.
+	backoff(retries int) time.Duration
+}
+
+// BackoffConfig defines the parameters for the default gRPC backoff strategy.
+type BackoffConfig struct {
+	// MaxDelay is the upper bound of backoff delay.
+	MaxDelay time.Duration
+
+	// TODO(stevvooe): The following fields are not exported, as allowing
+	// changes would violate the current gRPC specification for backoff. If
+	// gRPC decides to allow more interesting backoff strategies, these fields
+	// may be opened up in the future.
+
+	// baseDelay is the amount of time to wait before retrying after the first
+	// failure.
+	baseDelay time.Duration
+
+	// factor is applied to the backoff after each retry.
+	factor float64
+
+	// jitter provides a range to randomize backoff delays.
+	jitter float64
+}
+
+func setDefaults(bc *BackoffConfig) {
+	md := bc.MaxDelay
+	*bc = DefaultBackoffConfig
+
+	if md > 0 {
+		bc.MaxDelay = md
+	}
+}
+
+func (bc BackoffConfig) backoff(retries int) (t time.Duration) {
+	if retries == 0 {
+		return bc.baseDelay
+	}
+	backoff, max := float64(bc.baseDelay), float64(bc.MaxDelay)
+	for backoff < max && retries > 0 {
+		backoff *= bc.factor
+		retries--
+	}
+	if backoff > max {
+		backoff = max
+	}
+	// Randomize backoff delays so that if a cluster of requests start at
+	// the same time, they won't operate in lockstep.
+	backoff *= 1 + bc.jitter*(rand.Float64()*2-1)
+	if backoff < 0 {
+		return 0
+	}
+	return time.Duration(backoff)
+}
diff --git a/go/src/google.golang.org/grpc/backoff_test.go b/go/src/google.golang.org/grpc/backoff_test.go
new file mode 100644
index 0000000..bfca7b1
--- /dev/null
+++ b/go/src/google.golang.org/grpc/backoff_test.go
@@ -0,0 +1,11 @@
+package grpc
+
+import "testing"
+
+func TestBackoffConfigDefaults(t *testing.T) {
+	b := BackoffConfig{}
+	setDefaults(&b)
+	if b != DefaultBackoffConfig {
+		t.Fatalf("expected BackoffConfig to pick up default parameters: %v != %v", b, DefaultBackoffConfig)
+	}
+}
diff --git a/go/src/google.golang.org/grpc/balancer.go b/go/src/google.golang.org/grpc/balancer.go
new file mode 100644
index 0000000..348bf97
--- /dev/null
+++ b/go/src/google.golang.org/grpc/balancer.go
@@ -0,0 +1,340 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+	"fmt"
+	"sync"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc/grpclog"
+	"google.golang.org/grpc/naming"
+	"google.golang.org/grpc/transport"
+)
+
+// Address represents a server the client connects to.
+// This is the EXPERIMENTAL API and may be changed or extended in the future.
+type Address struct {
+	// Addr is the server address on which a connection will be established.
+	Addr string
+	// Metadata is the information associated with Addr, which may be used
+	// to make load balancing decision.
+	Metadata interface{}
+}
+
+// BalancerGetOptions configures a Get call.
+// This is the EXPERIMENTAL API and may be changed or extended in the future.
+type BalancerGetOptions struct {
+	// BlockingWait specifies whether Get should block when there is no
+	// connected address.
+	BlockingWait bool
+}
+
+// Balancer chooses network addresses for RPCs.
+// This is the EXPERIMENTAL API and may be changed or extended in the future.
+type Balancer interface {
+	// Start does the initialization work to bootstrap a Balancer. For example,
+	// this function may start the name resolution and watch the updates. It will
+	// be called when dialing.
+	Start(target string) error
+	// Up informs the Balancer that gRPC has a connection to the server at
+	// addr. It returns down which is called once the connection to addr gets
+	// lost or closed.
+	// TODO: It is not clear how to construct and take advantage of a meaningful
+	// error parameter for down. Realistic use cases are needed to guide the design.
+	Up(addr Address) (down func(error))
+	// Get gets the address of a server for the RPC corresponding to ctx.
+	// i) If it returns a connected address, gRPC internals issues the RPC on the
+	// connection to this address;
+	// ii) If it returns an address on which the connection is under construction
+	// (initiated by Notify(...)) but not connected, gRPC internals
+	//  * fails RPC if the RPC is fail-fast and connection is in the TransientFailure or
+	//  Shutdown state;
+	//  or
+	//  * issues RPC on the connection otherwise.
+	// iii) If it returns an address on which the connection does not exist, gRPC
+	// internals treats it as an error and will fail the corresponding RPC.
+	//
+	// Therefore, the following is the recommended rule when writing a custom Balancer.
+	// If opts.BlockingWait is true, it should return a connected address or
+	// block if there is no connected address. It should respect the timeout or
+	// cancellation of ctx when blocking. If opts.BlockingWait is false (for fail-fast
+	// RPCs), it should return an address it has notified via Notify(...) immediately
+	// instead of blocking.
+	//
+	// The function returns put, which is called once the RPC has completed or failed.
+	// put can collect and report RPC stats to a remote load balancer. gRPC internals
+	// will try to call this again if err is non-nil (unless err is ErrClientConnClosing).
+	//
+	// TODO: Add other non-recoverable errors?
+	Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error)
+	// Notify returns a channel that is used by gRPC internals to watch the addresses
+	// gRPC needs to connect. The addresses might be from a name resolver or remote
+	// load balancer. gRPC internals will compare it with the existing connected
+	// addresses. If the address Balancer notified is not in the existing connected
+	// addresses, gRPC starts to connect the address. If an address in the existing
+	// connected addresses is not in the notification list, the corresponding connection
+	// is shut down gracefully. Otherwise, no action is taken. Note that
+	// the Address slice must be the full list of Addresses which should be connected.
+	// It is NOT a delta.
+	Notify() <-chan []Address
+	// Close shuts down the balancer.
+	Close() error
+}
+
+// downErr implements net.Error. It is constructed by gRPC internals and passed to the down
+// call of Balancer.
+type downErr struct {
+	timeout   bool
+	temporary bool
+	desc      string
+}
+
+func (e downErr) Error() string   { return e.desc }
+func (e downErr) Timeout() bool   { return e.timeout }
+func (e downErr) Temporary() bool { return e.temporary }
+
+func downErrorf(timeout, temporary bool, format string, a ...interface{}) downErr {
+	return downErr{
+		timeout:   timeout,
+		temporary: temporary,
+		desc:      fmt.Sprintf(format, a...),
+	}
+}
+
+// RoundRobin returns a Balancer that selects addresses round-robin. It uses r to watch
+// the name resolution updates and updates the addresses available correspondingly.
+func RoundRobin(r naming.Resolver) Balancer {
+	return &roundRobin{r: r}
+}
+
+type roundRobin struct {
+	r         naming.Resolver
+	w         naming.Watcher
+	open      []Address // all the addresses the client should potentially connect to
+	mu        sync.Mutex
+	addrCh    chan []Address // the channel to notify gRPC internals the list of addresses the client should connect to.
+	connected []Address      // all the connected addresses
+	next      int            // index of the next address to return for Get()
+	waitCh    chan struct{}  // the channel to block when there is no connected address available
+	done      bool           // The Balancer is closed.
+}
+
+func (rr *roundRobin) watchAddrUpdates() error {
+	updates, err := rr.w.Next()
+	if err != nil {
+		grpclog.Printf("grpc: the naming watcher stops working due to %v.", err)
+		return err
+	}
+	rr.mu.Lock()
+	defer rr.mu.Unlock()
+	for _, update := range updates {
+		addr := Address{
+			Addr: update.Addr,
+		}
+		switch update.Op {
+		case naming.Add:
+			var exist bool
+			for _, v := range rr.open {
+				if addr == v {
+					exist = true
+					grpclog.Println("grpc: The name resolver wanted to add an existing address: ", addr)
+					break
+				}
+			}
+			if exist {
+				continue
+			}
+			rr.open = append(rr.open, addr)
+		case naming.Delete:
+			for i, v := range rr.open {
+				if v == addr {
+					copy(rr.open[i:], rr.open[i+1:])
+					rr.open = rr.open[:len(rr.open)-1]
+					break
+				}
+			}
+		default:
+			grpclog.Println("Unknown update.Op ", update.Op)
+		}
+	}
+	// Make a copy of rr.open and write it onto rr.addrCh so that gRPC internals gets notified.
+	open := make([]Address, len(rr.open))
+	copy(open, rr.open)
+	if rr.done {
+		return ErrClientConnClosing
+	}
+	rr.addrCh <- open
+	return nil
+}
+
+func (rr *roundRobin) Start(target string) error {
+	if rr.r == nil {
+		// If there is no name resolver installed, it is not needed to
+		// do name resolution. In this case, rr.addrCh stays nil.
+		return nil
+	}
+	w, err := rr.r.Resolve(target)
+	if err != nil {
+		return err
+	}
+	rr.w = w
+	rr.addrCh = make(chan []Address)
+	go func() {
+		for {
+			if err := rr.watchAddrUpdates(); err != nil {
+				return
+			}
+		}
+	}()
+	return nil
+}
+
+// Up appends addr to the end of rr.connected and sends notification if there
+// are pending Get() calls.
+func (rr *roundRobin) Up(addr Address) func(error) {
+	rr.mu.Lock()
+	defer rr.mu.Unlock()
+	for _, a := range rr.connected {
+		if a == addr {
+			return nil
+		}
+	}
+	rr.connected = append(rr.connected, addr)
+	if len(rr.connected) == 1 {
+		// addr is the only one available. Notify the Get() callers who are blocking.
+		if rr.waitCh != nil {
+			close(rr.waitCh)
+			rr.waitCh = nil
+		}
+	}
+	return func(err error) {
+		rr.down(addr, err)
+	}
+}
+
+// down removes addr from rr.connected and moves the remaining addrs forward.
+func (rr *roundRobin) down(addr Address, err error) {
+	rr.mu.Lock()
+	defer rr.mu.Unlock()
+	for i, a := range rr.connected {
+		if a == addr {
+			copy(rr.connected[i:], rr.connected[i+1:])
+			rr.connected = rr.connected[:len(rr.connected)-1]
+			return
+		}
+	}
+}
+
+// Get returns the next addr in the rotation.
+func (rr *roundRobin) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) {
+	var ch chan struct{}
+	rr.mu.Lock()
+	if rr.done {
+		rr.mu.Unlock()
+		err = ErrClientConnClosing
+		return
+	}
+	if rr.next >= len(rr.connected) {
+		rr.next = 0
+	}
+	if len(rr.connected) > 0 {
+		addr = rr.connected[rr.next]
+		rr.next++
+		rr.mu.Unlock()
+		return
+	}
+	// There is no address available. Wait on rr.waitCh.
+	// TODO(zhaoq): Handle the case when opts.BlockingWait is false.
+	if rr.waitCh == nil {
+		ch = make(chan struct{})
+		rr.waitCh = ch
+	} else {
+		ch = rr.waitCh
+	}
+	rr.mu.Unlock()
+	for {
+		select {
+		case <-ctx.Done():
+			err = transport.ContextErr(ctx.Err())
+			return
+		case <-ch:
+			rr.mu.Lock()
+			if rr.done {
+				rr.mu.Unlock()
+				err = ErrClientConnClosing
+				return
+			}
+			if len(rr.connected) == 0 {
+				// The newly added addr got removed by Down() again.
+				if rr.waitCh == nil {
+					ch = make(chan struct{})
+					rr.waitCh = ch
+				} else {
+					ch = rr.waitCh
+				}
+				rr.mu.Unlock()
+				continue
+			}
+			if rr.next >= len(rr.connected) {
+				rr.next = 0
+			}
+			addr = rr.connected[rr.next]
+			rr.next++
+			rr.mu.Unlock()
+			return
+		}
+	}
+}
+
+func (rr *roundRobin) Notify() <-chan []Address {
+	return rr.addrCh
+}
+
+func (rr *roundRobin) Close() error {
+	rr.mu.Lock()
+	defer rr.mu.Unlock()
+	rr.done = true
+	if rr.w != nil {
+		rr.w.Close()
+	}
+	if rr.waitCh != nil {
+		close(rr.waitCh)
+		rr.waitCh = nil
+	}
+	if rr.addrCh != nil {
+		close(rr.addrCh)
+	}
+	return nil
+}
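The connected-address selection in `Get` above is a simple rotating index that wraps whenever the list shrinks. As a minimal standalone sketch of just that path (a hypothetical type; locking and blocking on an empty list, which the real roundRobin handles, are omitted):

```go
package main

import "fmt"

// rrPicker cycles through connected addresses, resetting next when it
// runs past the end of the (possibly shrunken) list.
type rrPicker struct {
	connected []string
	next      int
}

// get returns the next address in rotation, or ok=false if none are
// connected (where the real balancer would block or fail fast).
func (p *rrPicker) get() (string, bool) {
	if len(p.connected) == 0 {
		return "", false
	}
	if p.next >= len(p.connected) {
		p.next = 0
	}
	addr := p.connected[p.next]
	p.next++
	return addr, true
}

func main() {
	p := &rrPicker{connected: []string{"a:1", "b:2", "c:3"}}
	for i := 0; i < 5; i++ {
		addr, _ := p.get()
		fmt.Println(addr)
	}
}
```

Bounds-checking `next` before every pick, rather than only on wrap, is what keeps the rotation safe when `down` removes addresses between calls.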
diff --git a/go/src/google.golang.org/grpc/balancer_test.go b/go/src/google.golang.org/grpc/balancer_test.go
new file mode 100644
index 0000000..9d8d2bc
--- /dev/null
+++ b/go/src/google.golang.org/grpc/balancer_test.go
@@ -0,0 +1,322 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+	"fmt"
+	"math"
+	"sync"
+	"testing"
+	"time"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/naming"
+)
+
+type testWatcher struct {
+	// the channel to receive name resolution updates
+	update chan *naming.Update
+	// the side channel used to learn how many updates are in a batch
+	side chan int
+	// the channel to notify the update injector that the update reading is done
+	readDone chan int
+}
+
+func (w *testWatcher) Next() (updates []*naming.Update, err error) {
+	n := <-w.side
+	if n == 0 {
+		return nil, fmt.Errorf("w.side is closed")
+	}
+	for i := 0; i < n; i++ {
+		u := <-w.update
+		if u != nil {
+			updates = append(updates, u)
+		}
+	}
+	w.readDone <- 0
+	return
+}
+
+func (w *testWatcher) Close() {
+}
+
+// inject sends name resolution updates to the testWatcher.
+func (w *testWatcher) inject(updates []*naming.Update) {
+	w.side <- len(updates)
+	for _, u := range updates {
+		w.update <- u
+	}
+	<-w.readDone
+}
+
+type testNameResolver struct {
+	w    *testWatcher
+	addr string
+}
+
+func (r *testNameResolver) Resolve(target string) (naming.Watcher, error) {
+	r.w = &testWatcher{
+		update:   make(chan *naming.Update, 1),
+		side:     make(chan int, 1),
+		readDone: make(chan int),
+	}
+	r.w.side <- 1
+	r.w.update <- &naming.Update{
+		Op:   naming.Add,
+		Addr: r.addr,
+	}
+	go func() {
+		<-r.w.readDone
+	}()
+	return r.w, nil
+}
+
+func startServers(t *testing.T, numServers int, maxStreams uint32) ([]*server, *testNameResolver) {
+	var servers []*server
+	for i := 0; i < numServers; i++ {
+		s := newTestServer()
+		servers = append(servers, s)
+		go s.start(t, 0, maxStreams)
+		s.wait(t, 2*time.Second)
+	}
+	// Point to servers[0].
+	addr := "127.0.0.1:" + servers[0].port
+	return servers, &testNameResolver{
+		addr: addr,
+	}
+}
+
+func TestNameDiscovery(t *testing.T) {
+	// Start 2 servers on 2 ports.
+	numServers := 2
+	servers, r := startServers(t, numServers, math.MaxUint32)
+	cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
+	if err != nil {
+		t.Fatalf("Failed to create ClientConn: %v", err)
+	}
+	req := "port"
+	var reply string
+	if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[0].port {
+		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port)
+	}
+	// Inject the name resolution change to remove servers[0] and add servers[1].
+	var updates []*naming.Update
+	updates = append(updates, &naming.Update{
+		Op:   naming.Delete,
+		Addr: "127.0.0.1:" + servers[0].port,
+	})
+	updates = append(updates, &naming.Update{
+		Op:   naming.Add,
+		Addr: "127.0.0.1:" + servers[1].port,
+	})
+	r.w.inject(updates)
+	// Loop until the RPCs in flight talk to servers[1].
+	for {
+		if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port {
+			break
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+	cc.Close()
+	for i := 0; i < numServers; i++ {
+		servers[i].stop()
+	}
+}
+
+func TestEmptyAddrs(t *testing.T) {
+	servers, r := startServers(t, 1, math.MaxUint32)
+	cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
+	if err != nil {
+		t.Fatalf("Failed to create ClientConn: %v", err)
+	}
+	var reply string
+	if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse {
+		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, reply = %q, want %q, <nil>", err, reply, expectedResponse)
+	}
+	// Inject a name resolution change to remove the server so that no address is
+	// available afterwards.
+	u := &naming.Update{
+		Op:   naming.Delete,
+		Addr: "127.0.0.1:" + servers[0].port,
+	}
+	r.w.inject([]*naming.Update{u})
+	// Loop until the above update applies.
+	for {
+		time.Sleep(10 * time.Millisecond)
+		ctx, _ := context.WithTimeout(context.Background(), 10*time.Millisecond)
+		if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc); err != nil {
+			break
+		}
+	}
+	cc.Close()
+	servers[0].stop()
+}
+
+func TestRoundRobin(t *testing.T) {
+	// Start 3 servers on 3 ports.
+	numServers := 3
+	servers, r := startServers(t, numServers, math.MaxUint32)
+	cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
+	if err != nil {
+		t.Fatalf("Failed to create ClientConn: %v", err)
+	}
+	// Add servers[1] to the service discovery.
+	u := &naming.Update{
+		Op:   naming.Add,
+		Addr: "127.0.0.1:" + servers[1].port,
+	}
+	r.w.inject([]*naming.Update{u})
+	req := "port"
+	var reply string
+	// Loop until servers[1] is up
+	for {
+		if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[1].port {
+			break
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+	// Add servers[2] to the service discovery.
+	u = &naming.Update{
+		Op:   naming.Add,
+		Addr: "127.0.0.1:" + servers[2].port,
+	}
+	r.w.inject([]*naming.Update{u})
+	// Loop until servers[2] is up.
+	for {
+		if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err != nil && ErrorDesc(err) == servers[2].port {
+			break
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+	// Check that incoming RPCs are served in a round-robin manner.
+	for i := 0; i < 10; i++ {
+		if err := Invoke(context.Background(), "/foo/bar", &req, &reply, cc); err == nil || ErrorDesc(err) != servers[i%numServers].port {
+			t.Fatalf("Index %d: Invoke(_, _, _, _, _) = %v, want %s", i, err, servers[i%numServers].port)
+		}
+	}
+	cc.Close()
+	for i := 0; i < numServers; i++ {
+		servers[i].stop()
+	}
+}
+
+func TestCloseWithPendingRPC(t *testing.T) {
+	servers, r := startServers(t, 1, math.MaxUint32)
+	cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
+	if err != nil {
+		t.Fatalf("Failed to create ClientConn: %v", err)
+	}
+	var reply string
+	if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil {
+		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want %s", err, servers[0].port)
+	}
+	// Remove the server.
+	updates := []*naming.Update{&naming.Update{
+		Op:   naming.Delete,
+		Addr: "127.0.0.1:" + servers[0].port,
+	}}
+	r.w.inject(updates)
+	// Loop until the above update applies.
+	for {
+		ctx, _ := context.WithTimeout(context.Background(), 10*time.Millisecond)
+		if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc); Code(err) == codes.DeadlineExceeded {
+			break
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+	// Issue 2 RPCs which should complete with an error status once cc is closed.
+	var wg sync.WaitGroup
+	wg.Add(2)
+	go func() {
+		defer wg.Done()
+		var reply string
+		if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err == nil {
+			t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err)
+		}
+	}()
+	go func() {
+		defer wg.Done()
+		var reply string
+		time.Sleep(5 * time.Millisecond)
+		if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err == nil {
+			t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want not nil", err)
+		}
+	}()
+	time.Sleep(5 * time.Millisecond)
+	cc.Close()
+	wg.Wait()
+	servers[0].stop()
+}
+
+func TestGetOnWaitChannel(t *testing.T) {
+	servers, r := startServers(t, 1, math.MaxUint32)
+	cc, err := Dial("foo.bar.com", WithBalancer(RoundRobin(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
+	if err != nil {
+		t.Fatalf("Failed to create ClientConn: %v", err)
+	}
+	// Remove all servers so that all upcoming RPCs will block on waitCh.
+	updates := []*naming.Update{&naming.Update{
+		Op:   naming.Delete,
+		Addr: "127.0.0.1:" + servers[0].port,
+	}}
+	r.w.inject(updates)
+	for {
+		var reply string
+		ctx, _ := context.WithTimeout(context.Background(), 10*time.Millisecond)
+		if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc); Code(err) == codes.DeadlineExceeded {
+			break
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+	var wg sync.WaitGroup
+	wg.Add(1)
+	go func() {
+		defer wg.Done()
+		var reply string
+		if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil {
+			t.Errorf("grpc.Invoke(_, _, _, _, _) = %v, want <nil>", err)
+		}
+	}()
+	// Add a connected server to get the above RPC through.
+	updates = []*naming.Update{&naming.Update{
+		Op:   naming.Add,
+		Addr: "127.0.0.1:" + servers[0].port,
+	}}
+	r.w.inject(updates)
+	// Wait until the above RPC succeeds.
+	wg.Wait()
+	cc.Close()
+	servers[0].stop()
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/benchmark.go b/go/src/google.golang.org/grpc/benchmark/benchmark.go
index 7215d35..d114327 100644
--- a/go/src/google.golang.org/grpc/benchmark/benchmark.go
+++ b/go/src/google.golang.org/grpc/benchmark/benchmark.go
@@ -37,8 +37,8 @@
 package benchmark
 
 import (
+	"fmt"
 	"io"
-	"math"
 	"net"
 
 	"golang.org/x/net/context"
@@ -74,7 +74,7 @@
 	}, nil
 }
 
-func (s *testServer) StreamingCall(stream testpb.TestService_StreamingCallServer) error {
+func (s *testServer) StreamingCall(stream testpb.BenchmarkService_StreamingCallServer) error {
 	for {
 		in, err := stream.Recv()
 		if err == io.EOF {
@@ -92,16 +92,70 @@
 	}
 }
 
-// StartServer starts a gRPC server serving a benchmark service on the given
-// address, which may be something like "localhost:0". It returns its listen
-// address and a function to stop the server.
-func StartServer(addr string) (string, func()) {
-	lis, err := net.Listen("tcp", addr)
+// byteBufServer is a gRPC server that sends and receives byte buffers.
+// The purpose is to benchmark gRPC performance without protobuf serialization/deserialization overhead.
+type byteBufServer struct {
+	respSize int32
+}
+
+// UnaryCall is a no-op and is not used for benchmarking.
+// If a bytebuf UnaryCall benchmark is needed later, the function body needs to be updated.
+func (s *byteBufServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) {
+	return &testpb.SimpleResponse{}, nil
+}
+
+func (s *byteBufServer) StreamingCall(stream testpb.BenchmarkService_StreamingCallServer) error {
+	for {
+		var in []byte
+		err := stream.(grpc.ServerStream).RecvMsg(&in)
+		if err == io.EOF {
+			return nil
+		}
+		if err != nil {
+			return err
+		}
+		out := make([]byte, s.respSize)
+		if err := stream.(grpc.ServerStream).SendMsg(&out); err != nil {
+			return err
+		}
+	}
+}
+
+// ServerInfo contains the information to create a gRPC benchmark server.
+type ServerInfo struct {
+	// Addr is the address of the server.
+	Addr string
+
+	// Type is the type of the server.
+	// It should be "protobuf" or "bytebuf".
+	Type string
+
+	// Metadata is an optional configuration.
+	// For "protobuf", it's ignored.
+	// For "bytebuf", it should be an int32 representing the response size.
+	Metadata interface{}
+}
+
+// StartServer starts a gRPC server serving a benchmark service according to info.
+// It returns its listen address and a function to stop the server.
+func StartServer(info ServerInfo, opts ...grpc.ServerOption) (string, func()) {
+	lis, err := net.Listen("tcp", info.Addr)
 	if err != nil {
 		grpclog.Fatalf("Failed to listen: %v", err)
 	}
-	s := grpc.NewServer(grpc.MaxConcurrentStreams(math.MaxUint32))
-	testpb.RegisterTestServiceServer(s, &testServer{})
+	s := grpc.NewServer(opts...)
+	switch info.Type {
+	case "protobuf":
+		testpb.RegisterBenchmarkServiceServer(s, &testServer{})
+	case "bytebuf":
+		respSize, ok := info.Metadata.(int32)
+		if !ok {
+			grpclog.Fatalf("failed to StartServer, invalid metadata: %v, for Type: %v", info.Metadata, info.Type)
+		}
+		testpb.RegisterBenchmarkServiceServer(s, &byteBufServer{respSize: respSize})
+	default:
+		grpclog.Fatalf("failed to StartServer, unknown Type: %v", info.Type)
+	}
 	go s.Serve(lis)
 	return lis.Addr().String(), func() {
 		s.Stop()
@@ -109,7 +163,7 @@
 }
 
 // DoUnaryCall performs an unary RPC with given stub and request and response sizes.
-func DoUnaryCall(tc testpb.TestServiceClient, reqSize, respSize int) {
+func DoUnaryCall(tc testpb.BenchmarkServiceClient, reqSize, respSize int) error {
 	pl := newPayload(testpb.PayloadType_COMPRESSABLE, reqSize)
 	req := &testpb.SimpleRequest{
 		ResponseType: pl.Type,
@@ -117,12 +171,13 @@
 		Payload:      pl,
 	}
 	if _, err := tc.UnaryCall(context.Background(), req); err != nil {
-		grpclog.Fatal("/TestService/UnaryCall RPC failed: ", err)
+		return fmt.Errorf("/BenchmarkService/UnaryCall(_, _) = _, %v, want _, <nil>", err)
 	}
+	return nil
 }
 
 // DoStreamingRoundTrip performs a round trip for a single streaming rpc.
-func DoStreamingRoundTrip(tc testpb.TestServiceClient, stream testpb.TestService_StreamingCallClient, reqSize, respSize int) {
+func DoStreamingRoundTrip(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) error {
 	pl := newPayload(testpb.PayloadType_COMPRESSABLE, reqSize)
 	req := &testpb.SimpleRequest{
 		ResponseType: pl.Type,
@@ -130,16 +185,38 @@
 		Payload:      pl,
 	}
 	if err := stream.Send(req); err != nil {
-		grpclog.Fatalf("StreamingCall(_).Send: %v", err)
+		return fmt.Errorf("/BenchmarkService/StreamingCall.Send(_) = %v, want <nil>", err)
 	}
 	if _, err := stream.Recv(); err != nil {
-		grpclog.Fatalf("StreamingCall(_).Recv: %v", err)
+		// EOF is a valid error here.
+		if err == io.EOF {
+			return nil
+		}
+		return fmt.Errorf("/BenchmarkService/StreamingCall.Recv(_) = %v, want <nil>", err)
 	}
+	return nil
+}
+
+// DoByteBufStreamingRoundTrip performs a round trip for a single streaming RPC, using a custom codec for byte buffers.
+func DoByteBufStreamingRoundTrip(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize int) error {
+	out := make([]byte, reqSize)
+	if err := stream.(grpc.ClientStream).SendMsg(&out); err != nil {
+		return fmt.Errorf("/BenchmarkService/StreamingCall.(ClientStream).SendMsg(_) = %v, want <nil>", err)
+	}
+	var in []byte
+	if err := stream.(grpc.ClientStream).RecvMsg(&in); err != nil {
+		// EOF is a valid error here.
+		if err == io.EOF {
+			return nil
+		}
+		return fmt.Errorf("/BenchmarkService/StreamingCall.(ClientStream).RecvMsg(_) = %v, want <nil>", err)
+	}
+	return nil
 }
 
 // NewClientConn creates a gRPC client connection to addr.
-func NewClientConn(addr string) *grpc.ClientConn {
-	conn, err := grpc.Dial(addr, grpc.WithInsecure())
+func NewClientConn(addr string, opts ...grpc.DialOption) *grpc.ClientConn {
+	conn, err := grpc.Dial(addr, opts...)
 	if err != nil {
 		grpclog.Fatalf("NewClientConn(%q) failed to create a ClientConn %v", addr, err)
 	}
diff --git a/go/src/google.golang.org/grpc/benchmark/benchmark_test.go b/go/src/google.golang.org/grpc/benchmark/benchmark_test.go
index 97779e2..8fe3fa1 100644
--- a/go/src/google.golang.org/grpc/benchmark/benchmark_test.go
+++ b/go/src/google.golang.org/grpc/benchmark/benchmark_test.go
@@ -10,15 +10,16 @@
 	"google.golang.org/grpc"
 	testpb "google.golang.org/grpc/benchmark/grpc_testing"
 	"google.golang.org/grpc/benchmark/stats"
+	"google.golang.org/grpc/grpclog"
 )
 
 func runUnary(b *testing.B, maxConcurrentCalls int) {
 	s := stats.AddStats(b, 38)
 	b.StopTimer()
-	target, stopper := StartServer("localhost:0")
+	target, stopper := StartServer(ServerInfo{Addr: "localhost:0", Type: "protobuf"})
 	defer stopper()
-	conn := NewClientConn(target)
-	tc := testpb.NewTestServiceClient(conn)
+	conn := NewClientConn(target, grpc.WithInsecure())
+	tc := testpb.NewBenchmarkServiceClient(conn)
 
 	// Warm up connection.
 	for i := 0; i < 10; i++ {
@@ -58,10 +59,10 @@
 func runStream(b *testing.B, maxConcurrentCalls int) {
 	s := stats.AddStats(b, 38)
 	b.StopTimer()
-	target, stopper := StartServer("localhost:0")
+	target, stopper := StartServer(ServerInfo{Addr: "localhost:0", Type: "protobuf"})
 	defer stopper()
-	conn := NewClientConn(target)
-	tc := testpb.NewTestServiceClient(conn)
+	conn := NewClientConn(target, grpc.WithInsecure())
+	tc := testpb.NewBenchmarkServiceClient(conn)
 
 	// Warm up connection.
 	stream, err := tc.StreamingCall(context.Background())
@@ -69,7 +70,7 @@
 		b.Fatalf("%v.StreamingCall(_) = _, %v", tc, err)
 	}
 	for i := 0; i < 10; i++ {
-		streamCaller(tc, stream)
+		streamCaller(stream)
 	}
 
 	ch := make(chan int, maxConcurrentCalls*4)
@@ -88,7 +89,7 @@
 			}
 			for range ch {
 				start := time.Now()
-				streamCaller(tc, stream)
+				streamCaller(stream)
 				elapse := time.Since(start)
 				mu.Lock()
 				s.Add(elapse)
@@ -106,12 +107,16 @@
 	wg.Wait()
 	conn.Close()
 }
-func unaryCaller(client testpb.TestServiceClient) {
-	DoUnaryCall(client, 1, 1)
+func unaryCaller(client testpb.BenchmarkServiceClient) {
+	if err := DoUnaryCall(client, 1, 1); err != nil {
+		grpclog.Fatalf("DoUnaryCall failed: %v", err)
+	}
 }
 
-func streamCaller(client testpb.TestServiceClient, stream testpb.TestService_StreamingCallClient) {
-	DoStreamingRoundTrip(client, stream, 1, 1)
+func streamCaller(stream testpb.BenchmarkService_StreamingCallClient) {
+	if err := DoStreamingRoundTrip(stream, 1, 1); err != nil {
+		grpclog.Fatalf("DoStreamingRoundTrip failed: %v", err)
+	}
 }
 
 func BenchmarkClientStreamc1(b *testing.B) {
diff --git a/go/src/google.golang.org/grpc/benchmark/client/main.go b/go/src/google.golang.org/grpc/benchmark/client/main.go
index e7f0a8f..5dfbe6a 100644
--- a/go/src/google.golang.org/grpc/benchmark/client/main.go
+++ b/go/src/google.golang.org/grpc/benchmark/client/main.go
@@ -28,18 +28,18 @@
 		   1 : streaming call.`)
 )
 
-func unaryCaller(client testpb.TestServiceClient) {
+func unaryCaller(client testpb.BenchmarkServiceClient) {
 	benchmark.DoUnaryCall(client, 1, 1)
 }
 
-func streamCaller(client testpb.TestServiceClient, stream testpb.TestService_StreamingCallClient) {
-	benchmark.DoStreamingRoundTrip(client, stream, 1, 1)
+func streamCaller(stream testpb.BenchmarkService_StreamingCallClient) {
+	benchmark.DoStreamingRoundTrip(stream, 1, 1)
 }
 
-func buildConnection() (s *stats.Stats, conn *grpc.ClientConn, tc testpb.TestServiceClient) {
+func buildConnection() (s *stats.Stats, conn *grpc.ClientConn, tc testpb.BenchmarkServiceClient) {
 	s = stats.NewStats(256)
 	conn = benchmark.NewClientConn(*server)
-	tc = testpb.NewTestServiceClient(conn)
+	tc = testpb.NewBenchmarkServiceClient(conn)
 	return s, conn, tc
 }
 
@@ -107,11 +107,11 @@
 			}
 			// Do some warm up.
 			for i := 0; i < 100; i++ {
-				streamCaller(tc, stream)
+				streamCaller(stream)
 			}
 			for range ch {
 				start := time.Now()
-				streamCaller(tc, stream)
+				streamCaller(stream)
 				elapse := time.Since(start)
 				mu.Lock()
 				s.Add(elapse)
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/control.pb.go b/go/src/google.golang.org/grpc/benchmark/grpc_testing/control.pb.go
new file mode 100644
index 0000000..fe5fe87
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/control.pb.go
@@ -0,0 +1,973 @@
+// Code generated by protoc-gen-go.
+// source: control.proto
+// DO NOT EDIT!
+
+/*
+Package grpc_testing is a generated protocol buffer package.
+
+It is generated from these files:
+	control.proto
+	messages.proto
+	payloads.proto
+	services.proto
+	stats.proto
+
+It has these top-level messages:
+	PoissonParams
+	UniformParams
+	DeterministicParams
+	ParetoParams
+	ClosedLoopParams
+	LoadParams
+	SecurityParams
+	ClientConfig
+	ClientStatus
+	Mark
+	ClientArgs
+	ServerConfig
+	ServerArgs
+	ServerStatus
+	CoreRequest
+	CoreResponse
+	Void
+	Scenario
+	Scenarios
+	Payload
+	EchoStatus
+	SimpleRequest
+	SimpleResponse
+	StreamingInputCallRequest
+	StreamingInputCallResponse
+	ResponseParameters
+	StreamingOutputCallRequest
+	StreamingOutputCallResponse
+	ReconnectParams
+	ReconnectInfo
+	ByteBufferParams
+	SimpleProtoParams
+	ComplexProtoParams
+	PayloadConfig
+	ServerStats
+	HistogramParams
+	HistogramData
+	ClientStats
+*/
+package grpc_testing
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
+type ClientType int32
+
+const (
+	ClientType_SYNC_CLIENT  ClientType = 0
+	ClientType_ASYNC_CLIENT ClientType = 1
+)
+
+var ClientType_name = map[int32]string{
+	0: "SYNC_CLIENT",
+	1: "ASYNC_CLIENT",
+}
+var ClientType_value = map[string]int32{
+	"SYNC_CLIENT":  0,
+	"ASYNC_CLIENT": 1,
+}
+
+func (x ClientType) String() string {
+	return proto.EnumName(ClientType_name, int32(x))
+}
+func (ClientType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+type ServerType int32
+
+const (
+	ServerType_SYNC_SERVER          ServerType = 0
+	ServerType_ASYNC_SERVER         ServerType = 1
+	ServerType_ASYNC_GENERIC_SERVER ServerType = 2
+)
+
+var ServerType_name = map[int32]string{
+	0: "SYNC_SERVER",
+	1: "ASYNC_SERVER",
+	2: "ASYNC_GENERIC_SERVER",
+}
+var ServerType_value = map[string]int32{
+	"SYNC_SERVER":          0,
+	"ASYNC_SERVER":         1,
+	"ASYNC_GENERIC_SERVER": 2,
+}
+
+func (x ServerType) String() string {
+	return proto.EnumName(ServerType_name, int32(x))
+}
+func (ServerType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+type RpcType int32
+
+const (
+	RpcType_UNARY     RpcType = 0
+	RpcType_STREAMING RpcType = 1
+)
+
+var RpcType_name = map[int32]string{
+	0: "UNARY",
+	1: "STREAMING",
+}
+var RpcType_value = map[string]int32{
+	"UNARY":     0,
+	"STREAMING": 1,
+}
+
+func (x RpcType) String() string {
+	return proto.EnumName(RpcType_name, int32(x))
+}
+func (RpcType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
+
+// Parameters of the Poisson process distribution, which is a good representation
+// of activity coming in from independent identical stationary sources.
+type PoissonParams struct {
+	// The rate of arrivals (a.k.a. lambda parameter of the exp distribution).
+	OfferedLoad float64 `protobuf:"fixed64,1,opt,name=offered_load,json=offeredLoad" json:"offered_load,omitempty"`
+}
+
+func (m *PoissonParams) Reset()                    { *m = PoissonParams{} }
+func (m *PoissonParams) String() string            { return proto.CompactTextString(m) }
+func (*PoissonParams) ProtoMessage()               {}
+func (*PoissonParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+type UniformParams struct {
+	InterarrivalLo float64 `protobuf:"fixed64,1,opt,name=interarrival_lo,json=interarrivalLo" json:"interarrival_lo,omitempty"`
+	InterarrivalHi float64 `protobuf:"fixed64,2,opt,name=interarrival_hi,json=interarrivalHi" json:"interarrival_hi,omitempty"`
+}
+
+func (m *UniformParams) Reset()                    { *m = UniformParams{} }
+func (m *UniformParams) String() string            { return proto.CompactTextString(m) }
+func (*UniformParams) ProtoMessage()               {}
+func (*UniformParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+type DeterministicParams struct {
+	OfferedLoad float64 `protobuf:"fixed64,1,opt,name=offered_load,json=offeredLoad" json:"offered_load,omitempty"`
+}
+
+func (m *DeterministicParams) Reset()                    { *m = DeterministicParams{} }
+func (m *DeterministicParams) String() string            { return proto.CompactTextString(m) }
+func (*DeterministicParams) ProtoMessage()               {}
+func (*DeterministicParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
+
+type ParetoParams struct {
+	InterarrivalBase float64 `protobuf:"fixed64,1,opt,name=interarrival_base,json=interarrivalBase" json:"interarrival_base,omitempty"`
+	Alpha            float64 `protobuf:"fixed64,2,opt,name=alpha" json:"alpha,omitempty"`
+}
+
+func (m *ParetoParams) Reset()                    { *m = ParetoParams{} }
+func (m *ParetoParams) String() string            { return proto.CompactTextString(m) }
+func (*ParetoParams) ProtoMessage()               {}
+func (*ParetoParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
+
+// Once an RPC finishes, immediately start a new one.
+// No configuration parameters needed.
+type ClosedLoopParams struct {
+}
+
+func (m *ClosedLoopParams) Reset()                    { *m = ClosedLoopParams{} }
+func (m *ClosedLoopParams) String() string            { return proto.CompactTextString(m) }
+func (*ClosedLoopParams) ProtoMessage()               {}
+func (*ClosedLoopParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
+
+type LoadParams struct {
+	// Types that are valid to be assigned to Load:
+	//	*LoadParams_ClosedLoop
+	//	*LoadParams_Poisson
+	//	*LoadParams_Uniform
+	//	*LoadParams_Determ
+	//	*LoadParams_Pareto
+	Load isLoadParams_Load `protobuf_oneof:"load"`
+}
+
+func (m *LoadParams) Reset()                    { *m = LoadParams{} }
+func (m *LoadParams) String() string            { return proto.CompactTextString(m) }
+func (*LoadParams) ProtoMessage()               {}
+func (*LoadParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+
+type isLoadParams_Load interface {
+	isLoadParams_Load()
+}
+
+type LoadParams_ClosedLoop struct {
+	ClosedLoop *ClosedLoopParams `protobuf:"bytes,1,opt,name=closed_loop,json=closedLoop,oneof"`
+}
+type LoadParams_Poisson struct {
+	Poisson *PoissonParams `protobuf:"bytes,2,opt,name=poisson,oneof"`
+}
+type LoadParams_Uniform struct {
+	Uniform *UniformParams `protobuf:"bytes,3,opt,name=uniform,oneof"`
+}
+type LoadParams_Determ struct {
+	Determ *DeterministicParams `protobuf:"bytes,4,opt,name=determ,oneof"`
+}
+type LoadParams_Pareto struct {
+	Pareto *ParetoParams `protobuf:"bytes,5,opt,name=pareto,oneof"`
+}
+
+func (*LoadParams_ClosedLoop) isLoadParams_Load() {}
+func (*LoadParams_Poisson) isLoadParams_Load()    {}
+func (*LoadParams_Uniform) isLoadParams_Load()    {}
+func (*LoadParams_Determ) isLoadParams_Load()     {}
+func (*LoadParams_Pareto) isLoadParams_Load()     {}
+
+func (m *LoadParams) GetLoad() isLoadParams_Load {
+	if m != nil {
+		return m.Load
+	}
+	return nil
+}
+
+func (m *LoadParams) GetClosedLoop() *ClosedLoopParams {
+	if x, ok := m.GetLoad().(*LoadParams_ClosedLoop); ok {
+		return x.ClosedLoop
+	}
+	return nil
+}
+
+func (m *LoadParams) GetPoisson() *PoissonParams {
+	if x, ok := m.GetLoad().(*LoadParams_Poisson); ok {
+		return x.Poisson
+	}
+	return nil
+}
+
+func (m *LoadParams) GetUniform() *UniformParams {
+	if x, ok := m.GetLoad().(*LoadParams_Uniform); ok {
+		return x.Uniform
+	}
+	return nil
+}
+
+func (m *LoadParams) GetDeterm() *DeterministicParams {
+	if x, ok := m.GetLoad().(*LoadParams_Determ); ok {
+		return x.Determ
+	}
+	return nil
+}
+
+func (m *LoadParams) GetPareto() *ParetoParams {
+	if x, ok := m.GetLoad().(*LoadParams_Pareto); ok {
+		return x.Pareto
+	}
+	return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*LoadParams) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _LoadParams_OneofMarshaler, _LoadParams_OneofUnmarshaler, _LoadParams_OneofSizer, []interface{}{
+		(*LoadParams_ClosedLoop)(nil),
+		(*LoadParams_Poisson)(nil),
+		(*LoadParams_Uniform)(nil),
+		(*LoadParams_Determ)(nil),
+		(*LoadParams_Pareto)(nil),
+	}
+}
+
+func _LoadParams_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*LoadParams)
+	// load
+	switch x := m.Load.(type) {
+	case *LoadParams_ClosedLoop:
+		b.EncodeVarint(1<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.ClosedLoop); err != nil {
+			return err
+		}
+	case *LoadParams_Poisson:
+		b.EncodeVarint(2<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Poisson); err != nil {
+			return err
+		}
+	case *LoadParams_Uniform:
+		b.EncodeVarint(3<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Uniform); err != nil {
+			return err
+		}
+	case *LoadParams_Determ:
+		b.EncodeVarint(4<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Determ); err != nil {
+			return err
+		}
+	case *LoadParams_Pareto:
+		b.EncodeVarint(5<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Pareto); err != nil {
+			return err
+		}
+	case nil:
+	default:
+		return fmt.Errorf("LoadParams.Load has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _LoadParams_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*LoadParams)
+	switch tag {
+	case 1: // load.closed_loop
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(ClosedLoopParams)
+		err := b.DecodeMessage(msg)
+		m.Load = &LoadParams_ClosedLoop{msg}
+		return true, err
+	case 2: // load.poisson
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(PoissonParams)
+		err := b.DecodeMessage(msg)
+		m.Load = &LoadParams_Poisson{msg}
+		return true, err
+	case 3: // load.uniform
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(UniformParams)
+		err := b.DecodeMessage(msg)
+		m.Load = &LoadParams_Uniform{msg}
+		return true, err
+	case 4: // load.determ
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(DeterministicParams)
+		err := b.DecodeMessage(msg)
+		m.Load = &LoadParams_Determ{msg}
+		return true, err
+	case 5: // load.pareto
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(ParetoParams)
+		err := b.DecodeMessage(msg)
+		m.Load = &LoadParams_Pareto{msg}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _LoadParams_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*LoadParams)
+	// load
+	switch x := m.Load.(type) {
+	case *LoadParams_ClosedLoop:
+		s := proto.Size(x.ClosedLoop)
+		n += proto.SizeVarint(1<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *LoadParams_Poisson:
+		s := proto.Size(x.Poisson)
+		n += proto.SizeVarint(2<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *LoadParams_Uniform:
+		s := proto.Size(x.Uniform)
+		n += proto.SizeVarint(3<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *LoadParams_Determ:
+		s := proto.Size(x.Determ)
+		n += proto.SizeVarint(4<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *LoadParams_Pareto:
+		s := proto.Size(x.Pareto)
+		n += proto.SizeVarint(5<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
+// presence of SecurityParams implies use of TLS
+type SecurityParams struct {
+	UseTestCa          bool   `protobuf:"varint,1,opt,name=use_test_ca,json=useTestCa" json:"use_test_ca,omitempty"`
+	ServerHostOverride string `protobuf:"bytes,2,opt,name=server_host_override,json=serverHostOverride" json:"server_host_override,omitempty"`
+}
+
+func (m *SecurityParams) Reset()                    { *m = SecurityParams{} }
+func (m *SecurityParams) String() string            { return proto.CompactTextString(m) }
+func (*SecurityParams) ProtoMessage()               {}
+func (*SecurityParams) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+
+type ClientConfig struct {
+	// List of targets to connect to. At least one target needs to be specified.
+	ServerTargets  []string        `protobuf:"bytes,1,rep,name=server_targets,json=serverTargets" json:"server_targets,omitempty"`
+	ClientType     ClientType      `protobuf:"varint,2,opt,name=client_type,json=clientType,enum=grpc.testing.ClientType" json:"client_type,omitempty"`
+	SecurityParams *SecurityParams `protobuf:"bytes,3,opt,name=security_params,json=securityParams" json:"security_params,omitempty"`
+	// How many concurrent RPCs to start for each channel.
+	// For synchronous client, use a separate thread for each outstanding RPC.
+	OutstandingRpcsPerChannel int32 `protobuf:"varint,4,opt,name=outstanding_rpcs_per_channel,json=outstandingRpcsPerChannel" json:"outstanding_rpcs_per_channel,omitempty"`
+	// Number of independent client channels to create.
+	// i-th channel will connect to server_target[i % server_targets.size()]
+	ClientChannels int32 `protobuf:"varint,5,opt,name=client_channels,json=clientChannels" json:"client_channels,omitempty"`
+	// Only for async client. Number of threads to use to start/manage RPCs.
+	AsyncClientThreads int32   `protobuf:"varint,7,opt,name=async_client_threads,json=asyncClientThreads" json:"async_client_threads,omitempty"`
+	RpcType            RpcType `protobuf:"varint,8,opt,name=rpc_type,json=rpcType,enum=grpc.testing.RpcType" json:"rpc_type,omitempty"`
+	// The requested load for the entire client (aggregated over all the threads).
+	LoadParams      *LoadParams      `protobuf:"bytes,10,opt,name=load_params,json=loadParams" json:"load_params,omitempty"`
+	PayloadConfig   *PayloadConfig   `protobuf:"bytes,11,opt,name=payload_config,json=payloadConfig" json:"payload_config,omitempty"`
+	HistogramParams *HistogramParams `protobuf:"bytes,12,opt,name=histogram_params,json=histogramParams" json:"histogram_params,omitempty"`
+	// Specify the cores we should run the client on, if desired
+	CoreList  []int32 `protobuf:"varint,13,rep,name=core_list,json=coreList" json:"core_list,omitempty"`
+	CoreLimit int32   `protobuf:"varint,14,opt,name=core_limit,json=coreLimit" json:"core_limit,omitempty"`
+}
+
+func (m *ClientConfig) Reset()                    { *m = ClientConfig{} }
+func (m *ClientConfig) String() string            { return proto.CompactTextString(m) }
+func (*ClientConfig) ProtoMessage()               {}
+func (*ClientConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
+
+func (m *ClientConfig) GetSecurityParams() *SecurityParams {
+	if m != nil {
+		return m.SecurityParams
+	}
+	return nil
+}
+
+func (m *ClientConfig) GetLoadParams() *LoadParams {
+	if m != nil {
+		return m.LoadParams
+	}
+	return nil
+}
+
+func (m *ClientConfig) GetPayloadConfig() *PayloadConfig {
+	if m != nil {
+		return m.PayloadConfig
+	}
+	return nil
+}
+
+func (m *ClientConfig) GetHistogramParams() *HistogramParams {
+	if m != nil {
+		return m.HistogramParams
+	}
+	return nil
+}
+
+type ClientStatus struct {
+	Stats *ClientStats `protobuf:"bytes,1,opt,name=stats" json:"stats,omitempty"`
+}
+
+func (m *ClientStatus) Reset()                    { *m = ClientStatus{} }
+func (m *ClientStatus) String() string            { return proto.CompactTextString(m) }
+func (*ClientStatus) ProtoMessage()               {}
+func (*ClientStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
+
+func (m *ClientStatus) GetStats() *ClientStats {
+	if m != nil {
+		return m.Stats
+	}
+	return nil
+}
+
+// Request current stats
+type Mark struct {
+	// if true, the stats will be reset after taking their snapshot.
+	Reset_ bool `protobuf:"varint,1,opt,name=reset" json:"reset,omitempty"`
+}
+
+func (m *Mark) Reset()                    { *m = Mark{} }
+func (m *Mark) String() string            { return proto.CompactTextString(m) }
+func (*Mark) ProtoMessage()               {}
+func (*Mark) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
+
+type ClientArgs struct {
+	// Types that are valid to be assigned to Argtype:
+	//	*ClientArgs_Setup
+	//	*ClientArgs_Mark
+	Argtype isClientArgs_Argtype `protobuf_oneof:"argtype"`
+}
+
+func (m *ClientArgs) Reset()                    { *m = ClientArgs{} }
+func (m *ClientArgs) String() string            { return proto.CompactTextString(m) }
+func (*ClientArgs) ProtoMessage()               {}
+func (*ClientArgs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
+
+type isClientArgs_Argtype interface {
+	isClientArgs_Argtype()
+}
+
+type ClientArgs_Setup struct {
+	Setup *ClientConfig `protobuf:"bytes,1,opt,name=setup,oneof"`
+}
+type ClientArgs_Mark struct {
+	Mark *Mark `protobuf:"bytes,2,opt,name=mark,oneof"`
+}
+
+func (*ClientArgs_Setup) isClientArgs_Argtype() {}
+func (*ClientArgs_Mark) isClientArgs_Argtype()  {}
+
+func (m *ClientArgs) GetArgtype() isClientArgs_Argtype {
+	if m != nil {
+		return m.Argtype
+	}
+	return nil
+}
+
+func (m *ClientArgs) GetSetup() *ClientConfig {
+	if x, ok := m.GetArgtype().(*ClientArgs_Setup); ok {
+		return x.Setup
+	}
+	return nil
+}
+
+func (m *ClientArgs) GetMark() *Mark {
+	if x, ok := m.GetArgtype().(*ClientArgs_Mark); ok {
+		return x.Mark
+	}
+	return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*ClientArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _ClientArgs_OneofMarshaler, _ClientArgs_OneofUnmarshaler, _ClientArgs_OneofSizer, []interface{}{
+		(*ClientArgs_Setup)(nil),
+		(*ClientArgs_Mark)(nil),
+	}
+}
+
+func _ClientArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*ClientArgs)
+	// argtype
+	switch x := m.Argtype.(type) {
+	case *ClientArgs_Setup:
+		b.EncodeVarint(1<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Setup); err != nil {
+			return err
+		}
+	case *ClientArgs_Mark:
+		b.EncodeVarint(2<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Mark); err != nil {
+			return err
+		}
+	case nil:
+	default:
+		return fmt.Errorf("ClientArgs.Argtype has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _ClientArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*ClientArgs)
+	switch tag {
+	case 1: // argtype.setup
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(ClientConfig)
+		err := b.DecodeMessage(msg)
+		m.Argtype = &ClientArgs_Setup{msg}
+		return true, err
+	case 2: // argtype.mark
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(Mark)
+		err := b.DecodeMessage(msg)
+		m.Argtype = &ClientArgs_Mark{msg}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _ClientArgs_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*ClientArgs)
+	// argtype
+	switch x := m.Argtype.(type) {
+	case *ClientArgs_Setup:
+		s := proto.Size(x.Setup)
+		n += proto.SizeVarint(1<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *ClientArgs_Mark:
+		s := proto.Size(x.Mark)
+		n += proto.SizeVarint(2<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
+type ServerConfig struct {
+	ServerType     ServerType      `protobuf:"varint,1,opt,name=server_type,json=serverType,enum=grpc.testing.ServerType" json:"server_type,omitempty"`
+	SecurityParams *SecurityParams `protobuf:"bytes,2,opt,name=security_params,json=securityParams" json:"security_params,omitempty"`
+	// Port on which to listen. Zero means pick unused port.
+	Port int32 `protobuf:"varint,4,opt,name=port" json:"port,omitempty"`
+	// Only for async server. Number of threads used to serve the requests.
+	AsyncServerThreads int32 `protobuf:"varint,7,opt,name=async_server_threads,json=asyncServerThreads" json:"async_server_threads,omitempty"`
+	// Specify the number of cores to limit server to, if desired
+	CoreLimit int32 `protobuf:"varint,8,opt,name=core_limit,json=coreLimit" json:"core_limit,omitempty"`
+	// payload config, used in generic server
+	PayloadConfig *PayloadConfig `protobuf:"bytes,9,opt,name=payload_config,json=payloadConfig" json:"payload_config,omitempty"`
+	// Specify the cores we should run the server on, if desired
+	CoreList []int32 `protobuf:"varint,10,rep,name=core_list,json=coreList" json:"core_list,omitempty"`
+}
+
+func (m *ServerConfig) Reset()                    { *m = ServerConfig{} }
+func (m *ServerConfig) String() string            { return proto.CompactTextString(m) }
+func (*ServerConfig) ProtoMessage()               {}
+func (*ServerConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
+
+func (m *ServerConfig) GetSecurityParams() *SecurityParams {
+	if m != nil {
+		return m.SecurityParams
+	}
+	return nil
+}
+
+func (m *ServerConfig) GetPayloadConfig() *PayloadConfig {
+	if m != nil {
+		return m.PayloadConfig
+	}
+	return nil
+}
+
+type ServerArgs struct {
+	// Types that are valid to be assigned to Argtype:
+	//	*ServerArgs_Setup
+	//	*ServerArgs_Mark
+	Argtype isServerArgs_Argtype `protobuf_oneof:"argtype"`
+}
+
+func (m *ServerArgs) Reset()                    { *m = ServerArgs{} }
+func (m *ServerArgs) String() string            { return proto.CompactTextString(m) }
+func (*ServerArgs) ProtoMessage()               {}
+func (*ServerArgs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{12} }
+
+type isServerArgs_Argtype interface {
+	isServerArgs_Argtype()
+}
+
+type ServerArgs_Setup struct {
+	Setup *ServerConfig `protobuf:"bytes,1,opt,name=setup,oneof"`
+}
+type ServerArgs_Mark struct {
+	Mark *Mark `protobuf:"bytes,2,opt,name=mark,oneof"`
+}
+
+func (*ServerArgs_Setup) isServerArgs_Argtype() {}
+func (*ServerArgs_Mark) isServerArgs_Argtype()  {}
+
+func (m *ServerArgs) GetArgtype() isServerArgs_Argtype {
+	if m != nil {
+		return m.Argtype
+	}
+	return nil
+}
+
+func (m *ServerArgs) GetSetup() *ServerConfig {
+	if x, ok := m.GetArgtype().(*ServerArgs_Setup); ok {
+		return x.Setup
+	}
+	return nil
+}
+
+func (m *ServerArgs) GetMark() *Mark {
+	if x, ok := m.GetArgtype().(*ServerArgs_Mark); ok {
+		return x.Mark
+	}
+	return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*ServerArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _ServerArgs_OneofMarshaler, _ServerArgs_OneofUnmarshaler, _ServerArgs_OneofSizer, []interface{}{
+		(*ServerArgs_Setup)(nil),
+		(*ServerArgs_Mark)(nil),
+	}
+}
+
+func _ServerArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*ServerArgs)
+	// argtype
+	switch x := m.Argtype.(type) {
+	case *ServerArgs_Setup:
+		b.EncodeVarint(1<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Setup); err != nil {
+			return err
+		}
+	case *ServerArgs_Mark:
+		b.EncodeVarint(2<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.Mark); err != nil {
+			return err
+		}
+	case nil:
+	default:
+		return fmt.Errorf("ServerArgs.Argtype has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _ServerArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*ServerArgs)
+	switch tag {
+	case 1: // argtype.setup
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(ServerConfig)
+		err := b.DecodeMessage(msg)
+		m.Argtype = &ServerArgs_Setup{msg}
+		return true, err
+	case 2: // argtype.mark
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(Mark)
+		err := b.DecodeMessage(msg)
+		m.Argtype = &ServerArgs_Mark{msg}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _ServerArgs_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*ServerArgs)
+	// argtype
+	switch x := m.Argtype.(type) {
+	case *ServerArgs_Setup:
+		s := proto.Size(x.Setup)
+		n += proto.SizeVarint(1<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *ServerArgs_Mark:
+		s := proto.Size(x.Mark)
+		n += proto.SizeVarint(2<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
+type ServerStatus struct {
+	Stats *ServerStats `protobuf:"bytes,1,opt,name=stats" json:"stats,omitempty"`
+	// the port bound by the server
+	Port int32 `protobuf:"varint,2,opt,name=port" json:"port,omitempty"`
+	// Number of cores available to the server
+	Cores int32 `protobuf:"varint,3,opt,name=cores" json:"cores,omitempty"`
+}
+
+func (m *ServerStatus) Reset()                    { *m = ServerStatus{} }
+func (m *ServerStatus) String() string            { return proto.CompactTextString(m) }
+func (*ServerStatus) ProtoMessage()               {}
+func (*ServerStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{13} }
+
+func (m *ServerStatus) GetStats() *ServerStats {
+	if m != nil {
+		return m.Stats
+	}
+	return nil
+}
+
+type CoreRequest struct {
+}
+
+func (m *CoreRequest) Reset()                    { *m = CoreRequest{} }
+func (m *CoreRequest) String() string            { return proto.CompactTextString(m) }
+func (*CoreRequest) ProtoMessage()               {}
+func (*CoreRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{14} }
+
+type CoreResponse struct {
+	// Number of cores available on the server
+	Cores int32 `protobuf:"varint,1,opt,name=cores" json:"cores,omitempty"`
+}
+
+func (m *CoreResponse) Reset()                    { *m = CoreResponse{} }
+func (m *CoreResponse) String() string            { return proto.CompactTextString(m) }
+func (*CoreResponse) ProtoMessage()               {}
+func (*CoreResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{15} }
+
+type Void struct {
+}
+
+func (m *Void) Reset()                    { *m = Void{} }
+func (m *Void) String() string            { return proto.CompactTextString(m) }
+func (*Void) ProtoMessage()               {}
+func (*Void) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{16} }
+
+// A single performance scenario: input to qps_json_driver
+type Scenario struct {
+	// Human readable name for this scenario
+	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+	// Client configuration
+	ClientConfig *ClientConfig `protobuf:"bytes,2,opt,name=client_config,json=clientConfig" json:"client_config,omitempty"`
+	// Number of clients to start for the test
+	NumClients int32 `protobuf:"varint,3,opt,name=num_clients,json=numClients" json:"num_clients,omitempty"`
+	// Server configuration
+	ServerConfig *ServerConfig `protobuf:"bytes,4,opt,name=server_config,json=serverConfig" json:"server_config,omitempty"`
+	// Number of servers to start for the test
+	NumServers int32 `protobuf:"varint,5,opt,name=num_servers,json=numServers" json:"num_servers,omitempty"`
+	// Warmup period, in seconds
+	WarmupSeconds int32 `protobuf:"varint,6,opt,name=warmup_seconds,json=warmupSeconds" json:"warmup_seconds,omitempty"`
+	// Benchmark time, in seconds
+	BenchmarkSeconds int32 `protobuf:"varint,7,opt,name=benchmark_seconds,json=benchmarkSeconds" json:"benchmark_seconds,omitempty"`
+	// Number of workers to spawn locally (usually zero)
+	SpawnLocalWorkerCount int32 `protobuf:"varint,8,opt,name=spawn_local_worker_count,json=spawnLocalWorkerCount" json:"spawn_local_worker_count,omitempty"`
+}
+
+func (m *Scenario) Reset()                    { *m = Scenario{} }
+func (m *Scenario) String() string            { return proto.CompactTextString(m) }
+func (*Scenario) ProtoMessage()               {}
+func (*Scenario) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{17} }
+
+func (m *Scenario) GetClientConfig() *ClientConfig {
+	if m != nil {
+		return m.ClientConfig
+	}
+	return nil
+}
+
+func (m *Scenario) GetServerConfig() *ServerConfig {
+	if m != nil {
+		return m.ServerConfig
+	}
+	return nil
+}
+
+// A set of scenarios to be run with qps_json_driver
+type Scenarios struct {
+	Scenarios []*Scenario `protobuf:"bytes,1,rep,name=scenarios" json:"scenarios,omitempty"`
+}
+
+func (m *Scenarios) Reset()                    { *m = Scenarios{} }
+func (m *Scenarios) String() string            { return proto.CompactTextString(m) }
+func (*Scenarios) ProtoMessage()               {}
+func (*Scenarios) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{18} }
+
+func (m *Scenarios) GetScenarios() []*Scenario {
+	if m != nil {
+		return m.Scenarios
+	}
+	return nil
+}
+
+func init() {
+	proto.RegisterType((*PoissonParams)(nil), "grpc.testing.PoissonParams")
+	proto.RegisterType((*UniformParams)(nil), "grpc.testing.UniformParams")
+	proto.RegisterType((*DeterministicParams)(nil), "grpc.testing.DeterministicParams")
+	proto.RegisterType((*ParetoParams)(nil), "grpc.testing.ParetoParams")
+	proto.RegisterType((*ClosedLoopParams)(nil), "grpc.testing.ClosedLoopParams")
+	proto.RegisterType((*LoadParams)(nil), "grpc.testing.LoadParams")
+	proto.RegisterType((*SecurityParams)(nil), "grpc.testing.SecurityParams")
+	proto.RegisterType((*ClientConfig)(nil), "grpc.testing.ClientConfig")
+	proto.RegisterType((*ClientStatus)(nil), "grpc.testing.ClientStatus")
+	proto.RegisterType((*Mark)(nil), "grpc.testing.Mark")
+	proto.RegisterType((*ClientArgs)(nil), "grpc.testing.ClientArgs")
+	proto.RegisterType((*ServerConfig)(nil), "grpc.testing.ServerConfig")
+	proto.RegisterType((*ServerArgs)(nil), "grpc.testing.ServerArgs")
+	proto.RegisterType((*ServerStatus)(nil), "grpc.testing.ServerStatus")
+	proto.RegisterType((*CoreRequest)(nil), "grpc.testing.CoreRequest")
+	proto.RegisterType((*CoreResponse)(nil), "grpc.testing.CoreResponse")
+	proto.RegisterType((*Void)(nil), "grpc.testing.Void")
+	proto.RegisterType((*Scenario)(nil), "grpc.testing.Scenario")
+	proto.RegisterType((*Scenarios)(nil), "grpc.testing.Scenarios")
+	proto.RegisterEnum("grpc.testing.ClientType", ClientType_name, ClientType_value)
+	proto.RegisterEnum("grpc.testing.ServerType", ServerType_name, ServerType_value)
+	proto.RegisterEnum("grpc.testing.RpcType", RpcType_name, RpcType_value)
+}
+
+var fileDescriptor0 = []byte{
+	// 1162 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xa4, 0x56, 0xdd, 0x6e, 0xdb, 0x46,
+	0x13, 0x8d, 0x14, 0xc9, 0x96, 0x86, 0x92, 0xac, 0x6f, 0xbf, 0xa4, 0x60, 0x1c, 0x27, 0x6d, 0xd8,
+	0x16, 0x0d, 0x5c, 0xc0, 0x29, 0xd4, 0x02, 0x69, 0xd1, 0x8b, 0x40, 0x56, 0x85, 0xd8, 0x80, 0xe3,
+	0xba, 0x2b, 0x27, 0x45, 0xae, 0x08, 0x9a, 0x5a, 0x4b, 0x44, 0x24, 0x2e, 0xbb, 0x4b, 0xc6, 0xf0,
+	0x2b, 0xf4, 0x99, 0xfa, 0x1c, 0x7d, 0x8d, 0xbe, 0x42, 0x67, 0xff, 0x64, 0x52, 0x11, 0x10, 0xb7,
+	0xbd, 0xe3, 0xce, 0x9c, 0xb3, 0x3b, 0x3b, 0x67, 0x66, 0x96, 0xd0, 0x8d, 0x79, 0x9a, 0x0b, 0xbe,
+	0x38, 0xc8, 0x04, 0xcf, 0x39, 0xe9, 0xcc, 0x44, 0x16, 0x1f, 0xe4, 0x4c, 0xe6, 0x49, 0x3a, 0xdb,
+	0xed, 0x65, 0xd1, 0xf5, 0x82, 0x47, 0x53, 0x69, 0xbc, 0xbb, 0x9e, 0xcc, 0xa3, 0xdc, 0x2e, 0x82,
+	0x01, 0x74, 0xcf, 0x78, 0x22, 0x25, 0x4f, 0xcf, 0x22, 0x11, 0x2d, 0x25, 0x79, 0x02, 0x1d, 0x7e,
+	0x79, 0xc9, 0x04, 0x9b, 0x86, 0x8a, 0xe4, 0xd7, 0x3e, 0xab, 0x3d, 0xad, 0x51, 0xcf, 0xda, 0x4e,
+	0xd0, 0x14, 0x44, 0xd0, 0x7d, 0x9d, 0x26, 0x97, 0x5c, 0x2c, 0x2d, 0xe7, 0x2b, 0xd8, 0x49, 0xd2,
+	0x9c, 0x89, 0x48, 0x88, 0xe4, 0x7d, 0xb4, 0x40, 0xa2, 0xa5, 0xf5, 0xca, 0xe6, 0x13, 0xfe, 0x01,
+	0x70, 0x9e, 0xf8, 0xf5, 0x0f, 0x81, 0x47, 0x49, 0xf0, 0x3d, 0xfc, 0xff, 0x27, 0x86, 0x96, 0x65,
+	0x92, 0x26, 0x78, 0x8b, 0xf8, 0xf6, 0xc1, 0xfd, 0x02, 0x1d, 0x04, 0xb3, 0x9c, 0x5b, 0xca, 0xd7,
+	0xf0, 0xbf, 0xca, 0x91, 0x17, 0x91, 0x64, 0x96, 0xd7, 0x2f, 0x3b, 0x0e, 0xd1, 0x4e, 0xee, 0x41,
+	0x33, 0x5a, 0x64, 0xf3, 0xc8, 0x46, 0x65, 0x16, 0x01, 0x81, 0xfe, 0x68, 0xc1, 0xa5, 0x3a, 0x80,
+	0x67, 0x66, 0xdb, 0xe0, 0x8f, 0x3a, 0x80, 0x3a, 0xcf, 0x9e, 0x32, 0x04, 0x2f, 0xd6, 0x10, 0x8c,
+	0x8b, 0x67, 0x7a, 0x7f, 0x6f, 0xf0, 0xf8, 0xa0, 0xac, 0xc3, 0xc1, 0xfa, 0x1e, 0x47, 0x77, 0x28,
+	0xc4, 0x2b, 0x1b, 0x79, 0x0e, 0xdb, 0x99, 0x51, 0x42, 0x9f, 0xee, 0x0d, 0x1e, 0x56, 0xe9, 0x15,
+	0x99, 0x90, 0xeb, 0xd0, 0x8a, 0x58, 0x18, 0x39, 0xfc, 0xbb, 0x9b, 0x88, 0x15, 0xad, 0x14, 0xd1,
+	0xa2, 0xc9, 0x8f, 0xb0, 0x35, 0xd5, 0x49, 0xf6, 0x1b, 0x9a, 0xf7, 0xa4, 0xca, 0xdb, 0x20, 0x00,
+	0xb2, 0x2d, 0x85, 0x7c, 0x07, 0x5b, 0x99, 0xce, 0xb3, 0xdf, 0xd4, 0xe4, 0xdd, 0xb5, 0x68, 0x4b,
+	0x1a, 0x28, 0x96, 0xc1, 0x1e, 0x6e, 0x41, 0x43, 0x09, 0x17, 0x5c, 0x40, 0x6f, 0xc2, 0xe2, 0x42,
+	0x24, 0xf9, 0xb5, 0xcd, 0xe0, 0x63, 0xf0, 0x0a, 0xc9, 0x42, 0xc5, 0x0f, 0xe3, 0x48, 0x67, 0xb0,
+	0x45, 0xdb, 0x68, 0x3a, 0x47, 0xcb, 0x28, 0x22, 0xdf, 0xc0, 0x3d, 0xc9, 0xc4, 0x7b, 0x26, 0xc2,
+	0x39, 0x47, 0x08, 0xc7, 0x2f, 0x91, 0x4c, 0x99, 0xce, 0x55, 0x9b, 0x12, 0xe3, 0x3b, 0x42, 0xd7,
+	0xcf, 0xd6, 0x13, 0xfc, 0xde, 0x84, 0xce, 0x68, 0x91, 0xb0, 0x34, 0x1f, 0xf1, 0xf4, 0x32, 0x99,
+	0x91, 0x2f, 0xa1, 0x67, 0xb7, 0xc8, 0x23, 0x31, 0x63, 0xb9, 0xc4, 0x53, 0xee, 0x22, 0xb9, 0x6b,
+	0xac, 0xe7, 0xc6, 0x48, 0x7e, 0x50, 0x5a, 0x2a, 0x5a, 0x98, 0x5f, 0x67, 0xe6, 0x80, 0xde, 0xc0,
+	0x5f, 0xd7, 0x52, 0x01, 0xce, 0xd1, 0xaf, 0x34, 0x74, 0xdf, 0x64, 0x0c, 0x3b, 0xd2, 0x5e, 0x2b,
+	0xcc, 0xf4, 0xbd, 0xac, 0x24, 0x7b, 0x55, 0x7a, 0xf5, 0xee, 0xb4, 0x27, 0xab, 0xb9, 0x78, 0x01,
+	0x7b, 0xbc, 0xc8, 0xb1, 0x4d, 0xd3, 0x29, 0xa2, 0x43, 0x64, 0xca, 0x30, 0xc3, 0xb0, 0xe3, 0x79,
+	0x94, 0xa6, 0x6c, 0xa1, 0xe5, 0x6a, 0xd2, 0x07, 0x25, 0x0c, 0x45, 0xc8, 0x19, 0x13, 0x23, 0x03,
+	0x50, 0x7d, 0x66, 0xaf, 0x60, 0x29, 0x52, 0xab, 0xd4, 0xa4, 0x3d, 0x63, 0xb6, 0x38, 0xa9, 0xb2,
+	0x1a, 0xc9, 0xeb, 0x34, 0x0e, 0xdd, 0x8d, 0xe7, 0x82, 0xe1, 0xa4, 0xf0, 0xb7, 0x35, 0x9a, 0x68,
+	0x9f, 0xbd, 0xab, 0xf1, 0x20, 0xa3, 0x85, 0xf1, 0x98, 0xd4, 0xb4, 0x74, 0x6a, 0xee, 0x57, 0xef,
+	0x86, 0xa1, 0xe8, 0xbc, 0x6c, 0x0b, 0xf3, 0xa1, 0xf2, 0xa9, 0x34, 0x77, 0x09, 0x01, 0x9d, 0x90,
+	0xb5, 0x7c, 0xde, 0xb4, 0x12, 0x85, 0xc5, 0x4d, 0x5b, 0x1d, 0x82, 0x1b, 0x5e, 0x61, 0xac, 0x35,
+	0xf4, 0xbd, 0x8d, 0xad, 0x61, 0x30, 0x46, 0x66, 0xda, 0xcd, 0xca, 0x4b, 0x72, 0x04, 0xfd, 0x39,
+	0x96, 0x30, 0x9f, 0xe1, 0x8e, 0x2e, 0x86, 0x8e, 0xde, 0xe5, 0x51, 0x75, 0x97, 0x23, 0x87, 0xb2,
+	0x81, 0xec, 0xcc, 0xab, 0x06, 0xf2, 0x10, 0xda, 0x31, 0x17, 0x2c, 0x5c, 0xa0, 0xdd, 0xef, 0x62,
+	0xe9, 0x34, 0x69, 0x4b, 0x19, 0x4e, 0x70, 0x4d, 0x1e, 0x01, 0x58, 0xe7, 0x32, 0xc9, 0xfd, 0x9e,
+	0xce, 0x5f, 0xdb, 0x78, 0xd1, 0x10, 0xbc, 0x70, 0xb5, 0x38, 0xc1, 0xe1, 0x5b, 0x48, 0xf2, 0x0c,
+	0x9a, 0x7a, 0x0c, 0xdb, 0x51, 0xf1, 0x60, 0x53, 0x79, 0x29, 0xa8, 0xa4, 0x06, 0x17, 0xec, 0x41,
+	0xe3, 0x55, 0x24, 0xde, 0xa9, 0x11, 0x25, 0x98, 0x64, 0xb9, 0xed, 0x10, 0xb3, 0x08, 0x0a, 0x00,
+	0xc3, 0x19, 0x8a, 0x99, 0x24, 0x03, 0xdc, 0x9c, 0xe5, 0x85, 0x9b, 0x43, 0xbb, 0x9b, 0x36, 0x37,
+	0xd9, 0xc1, 0xd6, 0x34, 0x50, 0xf2, 0x14, 0x1a, 0x4b, 0xdc, 0xdf, 0xce, 0x1e, 0x52, 0xa5, 0xa8,
+	0x93, 0x11, 0xaa, 0x11, 0x87, 0x6d, 0xd8, 0xc6, 0x4e, 0x51, 0x05, 0x10, 0xfc, 0x59, 0x87, 0xce,
+	0x44, 0x37, 0x8f, 0x4d, 0x36, 0x6a, 0xed, 0x5a, 0x4c, 0x15, 0x48, 0x6d, 0x53, 0xef, 0x18, 0x82,
+	0xe9, 0x1d, 0xb9, 0xfa, 0xde, 0xd4, 0x3b, 0xf5, 0x7f, 0xd1, 0x3b, 0x04, 0x1a, 0x19, 0x17, 0xb9,
+	0xed, 0x11, 0xfd, 0x7d, 0x53, 0xe5, 0x2e, 0xb6, 0x0d, 0x55, 0x6e, 0xa3, 0xb2, 0x55, 0x5e, 0x55,
+	0xb3, 0xb5, 0xa6, 0xe6, 0x86, 0xba, 0x6c, 0xff, 0xe3, 0xba, 0xac, 0x54, 0x13, 0x54, 0xab, 0x49,
+	0xe9, 0x69, 0x02, 0xba, 0x85, 0x9e, 0x65, 0x01, 0xfe, 0xa3, 0x9e, 0x89, 0x93, 0xf3, 0x56, 0x55,
+	0x7a, 0x03, 0x75, 0x55, 0xba, 0xca, 0x7e, 0xbd, 0x94, 0x7d, 0xac, 0x58, 0x75, 0x2f, 0x33, 0x0a,
+	0x9b, 0xd4, 0x2c, 0x82, 0x2e, 0x78, 0x23, 0xfc, 0xa0, 0xec, 0xb7, 0x02, 0xb7, 0x0b, 0xbe, 0xc0,
+	0xfe, 0xd0, 0x4b, 0x99, 0xf1, 0xd4, 0xbc, 0xc4, 0x86, 0x54, 0x2b, 0x93, 0xf0, 0xf9, 0x78, 0xc3,
+	0x93, 0x69, 0xf0, 0x57, 0x1d, 0x5a, 0x93, 0x98, 0xa5, 0x91, 0x48, 0xb8, 0x3a, 0x33, 0x8d, 0x96,
+	0xa6, 0xd8, 0xda, 0x54, 0x7f, 0xe3, 0x04, 0xed, 0xba, 0x01, 0x68, 0xf4, 0xa9, 0x7f, 0xac, 0x13,
+	0x68, 0x27, 0x2e, 0xbf, 0x15, 0x9f, 0x82, 0x97, 0x16, 0x4b, 0x3b, 0x16, 0x5d, 0xe8, 0x80, 0x26,
+	0xc3, 0x51, 0x33, 0xda, 0x3e, 0x1b, 0xee, 0x84, 0xc6, 0xc7, 0xb4, 0xa1, 0x1d, 0x59, 0x6e, 0x15,
+	0x7b, 0x82, 0xb1, 0xb9, 0xf9, 0xac, 0x4e, 0x30, 0x1c, 0xa9, 0x9e, 0xab, 0xab, 0x48, 0x2c, 0x8b,
+	0x0c, 0x31, 0x78, 0x06, 0xd6, 0xeb, 0x96, 0xc6, 0x74, 0x8d, 0x75, 0x62, 0x8c, 0xea, 0x07, 0xe7,
+	0x82, 0xa5, 0xf1, 0x5c, 0x69, 0xb9, 0x42, 0x9a, 0xca, 0xee, 0xaf, 0x1c, 0x0e, 0xfc, 0x1c, 0x7c,
+	0x99, 0x45, 0x57, 0x29, 0xfe, 0xa6, 0xc4, 0xf8, 0x33, 0x74, 0xc5, 0xc5, 0x3b, 0x7d, 0x83, 0x22,
+	0x75, 0x55, 0x7e, 0x5f, 0xfb, 0x4f, 0x94, 0xfb, 0x57, 0xed, 0x1d, 0x29, 0x67, 0x30, 0x84, 0xb6,
+	0x4b, 0xb8, 0xc4, 0xb7, 0xbf, 0x2d, 0xdd, 0x42, 0xbf, 0xa1, 0xde, 0xe0, 0x93, 0xb5, 0x7b, 0x5b,
+	0x37, 0xbd, 0x01, 0xee, 0x3f, 0x73, 0x33, 0x4a, 0xb7, 0xfb, 0x0e, 0x78, 0x93, 0xb7, 0xa7, 0xa3,
+	0x70, 0x74, 0x72, 0x3c, 0x3e, 0x3d, 0xef, 0xdf, 0x21, 0x7d, 0xe8, 0x0c, 0xcb, 0x96, 0xda, 0xfe,
+	0xb1, 0x6b, 0x82, 0x0a, 0x61, 0x32, 0xa6, 0x6f, 0xc6, 0xb4, 0x4c, 0xb0, 0x96, 0x1a, 0xf1, 0xe1,
+	0x9e, 0xb1, 0xbc, 0x1c, 0x9f, 0x8e, 0xe9, 0xf1, 0xca, 0x53, 0xdf, 0xff, 0x1c, 0xb6, 0xed, 0xbb,
+	0x44, 0xda, 0xd0, 0x7c, 0x7d, 0x3a, 0xa4, 0x6f, 0x71, 0x87, 0x2e, 0x5e, 0xea, 0x9c, 0x8e, 0x87,
+	0xaf, 0x8e, 0x4f, 0x5f, 0xf6, 0x6b, 0x17, 0x5b, 0xfa, 0x97, 0xf8, 0xdb, 0xbf, 0x03, 0x00, 0x00,
+	0xff, 0xff, 0x75, 0x59, 0xf4, 0x03, 0x4e, 0x0b, 0x00, 0x00,
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/control.proto b/go/src/google.golang.org/grpc/benchmark/grpc_testing/control.proto
new file mode 100644
index 0000000..e0fe0ec
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/control.proto
@@ -0,0 +1,201 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+import "payloads.proto";
+import "stats.proto";
+
+package grpc.testing;
+
+enum ClientType {
+  SYNC_CLIENT = 0;
+  ASYNC_CLIENT = 1;
+}
+
+enum ServerType {
+  SYNC_SERVER = 0;
+  ASYNC_SERVER = 1;
+  ASYNC_GENERIC_SERVER = 2;
+}
+
+enum RpcType {
+  UNARY = 0;
+  STREAMING = 1;
+}
+
+// Parameters of a Poisson process distribution, which is a good representation
+// of activity coming in from independent identical stationary sources.
+message PoissonParams {
+  // The rate of arrivals (a.k.a. the lambda parameter of the exponential distribution).
+  double offered_load = 1;
+}
+
+message UniformParams {
+  double interarrival_lo = 1;
+  double interarrival_hi = 2;
+}
+
+message DeterministicParams {
+  double offered_load = 1;
+}
+
+message ParetoParams {
+  double interarrival_base = 1;
+  double alpha = 2;
+}
+
+// Once an RPC finishes, immediately start a new one.
+// No configuration parameters needed.
+message ClosedLoopParams {
+}
+
+message LoadParams {
+  oneof load {
+    ClosedLoopParams closed_loop = 1;
+    PoissonParams poisson = 2;
+    UniformParams uniform = 3;
+    DeterministicParams determ = 4;
+    ParetoParams pareto = 5;
+  };
+}
+
+// presence of SecurityParams implies use of TLS
+message SecurityParams {
+  bool use_test_ca = 1;
+  string server_host_override = 2;
+}
+
+message ClientConfig {
+  // List of targets to connect to. At least one target needs to be specified.
+  repeated string server_targets = 1;
+  ClientType client_type = 2;
+  SecurityParams security_params = 3;
+  // How many concurrent RPCs to start for each channel.
+  // For synchronous client, use a separate thread for each outstanding RPC.
+  int32 outstanding_rpcs_per_channel = 4;
+  // Number of independent client channels to create.
+  // i-th channel will connect to server_target[i % server_targets.size()]
+  int32 client_channels = 5;
+  // Only for async client. Number of threads to use to start/manage RPCs.
+  int32 async_client_threads = 7;
+  RpcType rpc_type = 8;
+  // The requested load for the entire client (aggregated over all the threads).
+  LoadParams load_params = 10;
+  PayloadConfig payload_config = 11;
+  HistogramParams histogram_params = 12;
+
+  // Specify the cores we should run the client on, if desired
+  repeated int32 core_list = 13;
+  int32 core_limit = 14;
+}
+
+message ClientStatus {
+  ClientStats stats = 1;
+}
+
+// Request current stats
+message Mark {
+  // if true, the stats will be reset after taking their snapshot.
+  bool reset = 1;
+}
+
+message ClientArgs {
+  oneof argtype {
+    ClientConfig setup = 1;
+    Mark mark = 2;
+  }
+}
+
+message ServerConfig {
+  ServerType server_type = 1;
+  SecurityParams security_params = 2;
+  // Port on which to listen. Zero means pick unused port.
+  int32 port = 4;
+  // Only for async server. Number of threads used to serve the requests.
+  int32 async_server_threads = 7;
+  // Specify the number of cores to limit server to, if desired
+  int32 core_limit = 8;
+  // payload config, used in generic server
+  PayloadConfig payload_config = 9;
+
+  // Specify the cores we should run the server on, if desired
+  repeated int32 core_list = 10;
+}
+
+message ServerArgs {
+  oneof argtype {
+    ServerConfig setup = 1;
+    Mark mark = 2;
+  }
+}
+
+message ServerStatus {
+  ServerStats stats = 1;
+  // the port bound by the server
+  int32 port = 2;
+  // Number of cores available to the server
+  int32 cores = 3;
+}
+
+message CoreRequest {
+}
+
+message CoreResponse {
+  // Number of cores available on the server
+  int32 cores = 1;
+}
+
+message Void {
+}
+
+// A single performance scenario: input to qps_json_driver
+message Scenario {
+  // Human readable name for this scenario
+  string name = 1;
+  // Client configuration
+  ClientConfig client_config = 2;
+  // Number of clients to start for the test
+  int32 num_clients = 3;
+  // Server configuration
+  ServerConfig server_config = 4;
+  // Number of servers to start for the test
+  int32 num_servers = 5;
+  // Warmup period, in seconds
+  int32 warmup_seconds = 6;
+  // Benchmark time, in seconds
+  int32 benchmark_seconds = 7;
+  // Number of workers to spawn locally (usually zero)
+  int32 spawn_local_worker_count = 8;
+}
+
+// A set of scenarios to be run with qps_json_driver
+message Scenarios {
+  repeated Scenario scenarios = 1;
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/messages.pb.go b/go/src/google.golang.org/grpc/benchmark/grpc_testing/messages.pb.go
new file mode 100644
index 0000000..214d6d0
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/messages.pb.go
@@ -0,0 +1,345 @@
+// Code generated by protoc-gen-go.
+// source: messages.proto
+// DO NOT EDIT!
+
+package grpc_testing
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// The type of payload that should be returned.
+type PayloadType int32
+
+const (
+	// Compressable text format.
+	PayloadType_COMPRESSABLE PayloadType = 0
+	// Uncompressable binary format.
+	PayloadType_UNCOMPRESSABLE PayloadType = 1
+	// Randomly chosen from all other formats defined in this enum.
+	PayloadType_RANDOM PayloadType = 2
+)
+
+var PayloadType_name = map[int32]string{
+	0: "COMPRESSABLE",
+	1: "UNCOMPRESSABLE",
+	2: "RANDOM",
+}
+var PayloadType_value = map[string]int32{
+	"COMPRESSABLE":   0,
+	"UNCOMPRESSABLE": 1,
+	"RANDOM":         2,
+}
+
+func (x PayloadType) String() string {
+	return proto.EnumName(PayloadType_name, int32(x))
+}
+func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
+
+// Compression algorithms
+type CompressionType int32
+
+const (
+	// No compression
+	CompressionType_NONE    CompressionType = 0
+	CompressionType_GZIP    CompressionType = 1
+	CompressionType_DEFLATE CompressionType = 2
+)
+
+var CompressionType_name = map[int32]string{
+	0: "NONE",
+	1: "GZIP",
+	2: "DEFLATE",
+}
+var CompressionType_value = map[string]int32{
+	"NONE":    0,
+	"GZIP":    1,
+	"DEFLATE": 2,
+}
+
+func (x CompressionType) String() string {
+	return proto.EnumName(CompressionType_name, int32(x))
+}
+func (CompressionType) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{1} }
+
+// A block of data, to simply increase gRPC message size.
+type Payload struct {
+	// The type of data in body.
+	Type PayloadType `protobuf:"varint,1,opt,name=type,enum=grpc.testing.PayloadType" json:"type,omitempty"`
+	// Primary contents of payload.
+	Body []byte `protobuf:"bytes,2,opt,name=body,proto3" json:"body,omitempty"`
+}
+
+func (m *Payload) Reset()                    { *m = Payload{} }
+func (m *Payload) String() string            { return proto.CompactTextString(m) }
+func (*Payload) ProtoMessage()               {}
+func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
+
+// A protobuf representation for grpc status. This is used by test
+// clients to specify a status that the server should attempt to return.
+type EchoStatus struct {
+	Code    int32  `protobuf:"varint,1,opt,name=code" json:"code,omitempty"`
+	Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
+}
+
+func (m *EchoStatus) Reset()                    { *m = EchoStatus{} }
+func (m *EchoStatus) String() string            { return proto.CompactTextString(m) }
+func (*EchoStatus) ProtoMessage()               {}
+func (*EchoStatus) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{1} }
+
+// Unary request.
+type SimpleRequest struct {
+	// Desired payload type in the response from the server.
+	// If response_type is RANDOM, server randomly chooses one from other formats.
+	ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"`
+	// Desired payload size in the response from the server.
+	// If response_type is COMPRESSABLE, this denotes the size before compression.
+	ResponseSize int32 `protobuf:"varint,2,opt,name=response_size,json=responseSize" json:"response_size,omitempty"`
+	// Optional input payload sent along with the request.
+	Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"`
+	// Whether SimpleResponse should include username.
+	FillUsername bool `protobuf:"varint,4,opt,name=fill_username,json=fillUsername" json:"fill_username,omitempty"`
+	// Whether SimpleResponse should include OAuth scope.
+	FillOauthScope bool `protobuf:"varint,5,opt,name=fill_oauth_scope,json=fillOauthScope" json:"fill_oauth_scope,omitempty"`
+	// Compression algorithm to be used by the server for the response (stream)
+	ResponseCompression CompressionType `protobuf:"varint,6,opt,name=response_compression,json=responseCompression,enum=grpc.testing.CompressionType" json:"response_compression,omitempty"`
+	// Whether server should return a given status
+	ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus" json:"response_status,omitempty"`
+}
+
+func (m *SimpleRequest) Reset()                    { *m = SimpleRequest{} }
+func (m *SimpleRequest) String() string            { return proto.CompactTextString(m) }
+func (*SimpleRequest) ProtoMessage()               {}
+func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{2} }
+
+func (m *SimpleRequest) GetPayload() *Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+func (m *SimpleRequest) GetResponseStatus() *EchoStatus {
+	if m != nil {
+		return m.ResponseStatus
+	}
+	return nil
+}
+
+// Unary response, as configured by the request.
+type SimpleResponse struct {
+	// Payload to increase message size.
+	Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"`
+	// The user the request came from, for verifying authentication was
+	// successful when the client expected it.
+	Username string `protobuf:"bytes,2,opt,name=username" json:"username,omitempty"`
+	// OAuth scope.
+	OauthScope string `protobuf:"bytes,3,opt,name=oauth_scope,json=oauthScope" json:"oauth_scope,omitempty"`
+}
+
+func (m *SimpleResponse) Reset()                    { *m = SimpleResponse{} }
+func (m *SimpleResponse) String() string            { return proto.CompactTextString(m) }
+func (*SimpleResponse) ProtoMessage()               {}
+func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{3} }
+
+func (m *SimpleResponse) GetPayload() *Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+// Client-streaming request.
+type StreamingInputCallRequest struct {
+	// Optional input payload sent along with the request.
+	Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"`
+}
+
+func (m *StreamingInputCallRequest) Reset()                    { *m = StreamingInputCallRequest{} }
+func (m *StreamingInputCallRequest) String() string            { return proto.CompactTextString(m) }
+func (*StreamingInputCallRequest) ProtoMessage()               {}
+func (*StreamingInputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{4} }
+
+func (m *StreamingInputCallRequest) GetPayload() *Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+// Client-streaming response.
+type StreamingInputCallResponse struct {
+	// Aggregated size of payloads received from the client.
+	AggregatedPayloadSize int32 `protobuf:"varint,1,opt,name=aggregated_payload_size,json=aggregatedPayloadSize" json:"aggregated_payload_size,omitempty"`
+}
+
+func (m *StreamingInputCallResponse) Reset()                    { *m = StreamingInputCallResponse{} }
+func (m *StreamingInputCallResponse) String() string            { return proto.CompactTextString(m) }
+func (*StreamingInputCallResponse) ProtoMessage()               {}
+func (*StreamingInputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{5} }
+
+// Configuration for a particular response.
+type ResponseParameters struct {
+	// Desired payload sizes in responses from the server.
+	// If response_type is COMPRESSABLE, this denotes the size before compression.
+	Size int32 `protobuf:"varint,1,opt,name=size" json:"size,omitempty"`
+	// Desired interval between consecutive responses in the response stream in
+	// microseconds.
+	IntervalUs int32 `protobuf:"varint,2,opt,name=interval_us,json=intervalUs" json:"interval_us,omitempty"`
+}
+
+func (m *ResponseParameters) Reset()                    { *m = ResponseParameters{} }
+func (m *ResponseParameters) String() string            { return proto.CompactTextString(m) }
+func (*ResponseParameters) ProtoMessage()               {}
+func (*ResponseParameters) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{6} }
+
+// Server-streaming request.
+type StreamingOutputCallRequest struct {
+	// Desired payload type in the response from the server.
+	// If response_type is RANDOM, the payload from each response in the stream
+	// might be of different types. This is to simulate a mixed type of payload
+	// stream.
+	ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,json=responseType,enum=grpc.testing.PayloadType" json:"response_type,omitempty"`
+	// Configuration for each expected response message.
+	ResponseParameters []*ResponseParameters `protobuf:"bytes,2,rep,name=response_parameters,json=responseParameters" json:"response_parameters,omitempty"`
+	// Optional input payload sent along with the request.
+	Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"`
+	// Compression algorithm to be used by the server for the response (stream)
+	ResponseCompression CompressionType `protobuf:"varint,6,opt,name=response_compression,json=responseCompression,enum=grpc.testing.CompressionType" json:"response_compression,omitempty"`
+	// Whether server should return a given status
+	ResponseStatus *EchoStatus `protobuf:"bytes,7,opt,name=response_status,json=responseStatus" json:"response_status,omitempty"`
+}
+
+func (m *StreamingOutputCallRequest) Reset()                    { *m = StreamingOutputCallRequest{} }
+func (m *StreamingOutputCallRequest) String() string            { return proto.CompactTextString(m) }
+func (*StreamingOutputCallRequest) ProtoMessage()               {}
+func (*StreamingOutputCallRequest) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{7} }
+
+func (m *StreamingOutputCallRequest) GetResponseParameters() []*ResponseParameters {
+	if m != nil {
+		return m.ResponseParameters
+	}
+	return nil
+}
+
+func (m *StreamingOutputCallRequest) GetPayload() *Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+func (m *StreamingOutputCallRequest) GetResponseStatus() *EchoStatus {
+	if m != nil {
+		return m.ResponseStatus
+	}
+	return nil
+}
+
+// Server-streaming response, as configured by the request and parameters.
+type StreamingOutputCallResponse struct {
+	// Payload to increase response size.
+	Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"`
+}
+
+func (m *StreamingOutputCallResponse) Reset()                    { *m = StreamingOutputCallResponse{} }
+func (m *StreamingOutputCallResponse) String() string            { return proto.CompactTextString(m) }
+func (*StreamingOutputCallResponse) ProtoMessage()               {}
+func (*StreamingOutputCallResponse) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{8} }
+
+func (m *StreamingOutputCallResponse) GetPayload() *Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+// For reconnect interop test only.
+// Client tells server what reconnection parameters it used.
+type ReconnectParams struct {
+	MaxReconnectBackoffMs int32 `protobuf:"varint,1,opt,name=max_reconnect_backoff_ms,json=maxReconnectBackoffMs" json:"max_reconnect_backoff_ms,omitempty"`
+}
+
+func (m *ReconnectParams) Reset()                    { *m = ReconnectParams{} }
+func (m *ReconnectParams) String() string            { return proto.CompactTextString(m) }
+func (*ReconnectParams) ProtoMessage()               {}
+func (*ReconnectParams) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{9} }
+
+// For reconnect interop test only.
+// Server tells client whether its reconnects are following the spec and the
+// reconnect backoffs it saw.
+type ReconnectInfo struct {
+	Passed    bool    `protobuf:"varint,1,opt,name=passed" json:"passed,omitempty"`
+	BackoffMs []int32 `protobuf:"varint,2,rep,name=backoff_ms,json=backoffMs" json:"backoff_ms,omitempty"`
+}
+
+func (m *ReconnectInfo) Reset()                    { *m = ReconnectInfo{} }
+func (m *ReconnectInfo) String() string            { return proto.CompactTextString(m) }
+func (*ReconnectInfo) ProtoMessage()               {}
+func (*ReconnectInfo) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{10} }
+
+func init() {
+	proto.RegisterType((*Payload)(nil), "grpc.testing.Payload")
+	proto.RegisterType((*EchoStatus)(nil), "grpc.testing.EchoStatus")
+	proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest")
+	proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse")
+	proto.RegisterType((*StreamingInputCallRequest)(nil), "grpc.testing.StreamingInputCallRequest")
+	proto.RegisterType((*StreamingInputCallResponse)(nil), "grpc.testing.StreamingInputCallResponse")
+	proto.RegisterType((*ResponseParameters)(nil), "grpc.testing.ResponseParameters")
+	proto.RegisterType((*StreamingOutputCallRequest)(nil), "grpc.testing.StreamingOutputCallRequest")
+	proto.RegisterType((*StreamingOutputCallResponse)(nil), "grpc.testing.StreamingOutputCallResponse")
+	proto.RegisterType((*ReconnectParams)(nil), "grpc.testing.ReconnectParams")
+	proto.RegisterType((*ReconnectInfo)(nil), "grpc.testing.ReconnectInfo")
+	proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value)
+	proto.RegisterEnum("grpc.testing.CompressionType", CompressionType_name, CompressionType_value)
+}
+
+var fileDescriptor1 = []byte{
+	// 645 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xcc, 0x55, 0x4d, 0x6f, 0xd3, 0x40,
+	0x10, 0x25, 0xdf, 0xe9, 0x24, 0x4d, 0xa3, 0x85, 0x82, 0x5b, 0x54, 0x51, 0x99, 0x4b, 0x55, 0x89,
+	0x20, 0x15, 0x09, 0x24, 0x0e, 0xa0, 0xb4, 0x4d, 0x51, 0x50, 0x9b, 0x84, 0x75, 0x7b, 0xe1, 0x62,
+	0x6d, 0x9c, 0x4d, 0x1a, 0x11, 0x7b, 0x8d, 0x77, 0x8d, 0x28, 0x07, 0xee, 0xfc, 0x60, 0xee, 0xec,
+	0xae, 0xbd, 0x8e, 0xd3, 0xf6, 0xd0, 0xc2, 0x85, 0xdb, 0xce, 0xcc, 0x9b, 0x97, 0x79, 0x33, 0xcf,
+	0x0a, 0xb4, 0x7c, 0xca, 0x39, 0x99, 0x51, 0xde, 0x09, 0x23, 0x26, 0x18, 0x6a, 0xce, 0xa2, 0xd0,
+	0xeb, 0x08, 0xca, 0xc5, 0x3c, 0x98, 0xd9, 0xa7, 0x50, 0x1b, 0x91, 0xab, 0x05, 0x23, 0x13, 0xf4,
+	0x02, 0xca, 0xe2, 0x2a, 0xa4, 0x56, 0x61, 0xb7, 0xb0, 0xd7, 0x3a, 0xd8, 0xea, 0xe4, 0x71, 0x9d,
+	0x14, 0x74, 0x2e, 0x01, 0x58, 0xc3, 0x10, 0x82, 0xf2, 0x98, 0x4d, 0xae, 0xac, 0xa2, 0x84, 0x37,
+	0xb1, 0x7e, 0xdb, 0x6f, 0x01, 0x7a, 0xde, 0x25, 0x73, 0x04, 0x11, 0x31, 0x57, 0x08, 0x8f, 0x4d,
+	0x12, 0xc2, 0x0a, 0xd6, 0x6f, 0x64, 0x41, 0x2d, 0x9d, 0x47, 0x37, 0xae, 0x61, 0x13, 0xda, 0xbf,
+	0x4a, 0xb0, 0xee, 0xcc, 0xfd, 0x70, 0x41, 0x31, 0xfd, 0x1a, 0xcb, 0x9f, 0x45, 0xef, 0x60, 0x3d,
+	0xa2, 0x3c, 0x64, 0x01, 0xa7, 0xee, 0xdd, 0x26, 0x6b, 0x1a, 0xbc, 0x8a, 0xd0, 0xf3, 0x5c, 0x3f,
+	0x9f, 0xff, 0x48, 0x7e, 0xb1, 0xb2, 0x04, 0x39, 0x32, 0x87, 0x5e, 0x42, 0x2d, 0x4c, 0x18, 0xac,
+	0x92, 0x2c, 0x37, 0x0e, 0x36, 0x6f, 0xa5, 0xc7, 0x06, 0xa5, 0x58, 0xa7, 0xf3, 0xc5, 0xc2, 0x8d,
+	0x39, 0x8d, 0x02, 0xe2, 0x53, 0xab, 0x2c, 0xdb, 0xea, 0xb8, 0xa9, 0x92, 0x17, 0x69, 0x0e, 0xed,
+	0x41, 0x5b, 0x83, 0x18, 0x89, 0xc5, 0xa5, 0xcb, 0x3d, 0x26, 0xa7, 0xaf, 0x68, 0x5c, 0x4b, 0xe5,
+	0x87, 0x2a, 0xed, 0xa8, 0x2c, 0x1a, 0xc1, 0xa3, 0x6c, 0x48, 0x8f, 0xf9, 0xa1, 0x0c, 0xf8, 0x9c,
+	0x05, 0x56, 0x55, 0x6b, 0xdd, 0x59, 0x1d, 0xe6, 0x68, 0x09, 0xd0, 0x7a, 0x1f, 0x9a, 0xd6, 0x5c,
+	0x01, 0x75, 0x61, 0x63, 0x29, 0x5b, 0x5f, 0xc2, 0xaa, 0x69, 0x65, 0xd6, 0x2a, 0xd9, 0xf2, 0x52,
+	0xb8, 0x95, 0xad, 0x44, 0xc7, 0xf6, 0x4f, 0x68, 0x99, 0x53, 0x24, 0xf9, 0xfc, 0x9a, 0x0a, 0x77,
+	0x5a, 0xd3, 0x36, 0xd4, 0xb3, 0x0d, 0x25, 0x97, 0xce, 0x62, 0xf4, 0x0c, 0x1a, 0xf9, 0xc5, 0x94,
+	0x74, 0x19, 0x58, 0xb6, 0x14, 0xe9, 0xca, 0x2d, 0x47, 0x44, 0x94, 0xf8, 0x92, 0xba, 0x1f, 0x84,
+	0xb1, 0x38, 0x22, 0x8b, 0x85, 0xb1, 0xc5, 0x7d, 0x47, 0xb1, 0xcf, 0x61, 0xfb, 0x36, 0xb6, 0x54,
+	0xd9, 0x6b, 0x78, 0x42, 0x66, 0xb3, 0x88, 0xce, 0x88, 0xa0, 0x13, 0x37, 0xed, 0x49, 0xfc, 0x92,
+	0x18, 0x77, 0x73, 0x59, 0x4e, 0xa9, 0x95, 0x71, 0xec, 0x3e, 0x20, 0xc3, 0x31, 0x22, 0x91, 0x94,
+	0x25, 0x68, 0xa4, 0x3d, 0x9f, 0x6b, 0xd5, 0x6f, 0x25, 0x77, 0x1e, 0xc8, 0xea, 0x37, 0xa2, 0x5c,
+	0x93, 0xba, 0x10, 0x4c, 0xea, 0x82, 0xdb, 0xbf, 0x8b, 0xb9, 0x09, 0x87, 0xb1, 0xb8, 0x26, 0xf8,
+	0x5f, 0xbf, 0x83, 0x4f, 0x90, 0xf9, 0x44, 0xea, 0x33, 0xa3, 0xca, 0x39, 0x4a, 0x72, 0x79, 0xbb,
+	0xab, 0x2c, 0x37, 0x25, 0x61, 0x14, 0xdd, 0x94, 0x79, 0xef, 0xaf, 0xe6, 0xbf, 0xb4, 0xf9, 0x00,
+	0x9e, 0xde, 0xba, 0xf6, 0xbf, 0xf4, 0xbc, 0xfd, 0x11, 0x36, 0x30, 0xf5, 0x58, 0x10, 0x50, 0x4f,
+	0xe8, 0x65, 0x71, 0xf4, 0x06, 0x2c, 0x9f, 0x7c, 0x77, 0x23, 0x93, 0x76, 0xc7, 0xc4, 0xfb, 0xc2,
+	0xa6, 0x53, 0xd7, 0xe7, 0xc6, 0x5e, 0xb2, 0x9e, 0x75, 0x1d, 0x26, 0xd5, 0x33, 0x6e, 0x9f, 0xc0,
+	0x7a, 0x96, 0xed, 0x07, 0x53, 0x86, 0x1e, 0x43, 0x35, 0x24, 0x9c, 0xd3, 0x64, 0x98, 0x3a, 0x4e,
+	0x23, 0xb4, 0x03, 0x90, 0xe3, 0x54, 0x47, 0xad, 0xe0, 0xb5, 0xb1, 0xe1, 0xd9, 0x7f, 0x0f, 0x8d,
+	0x9c, 0x33, 0x50, 0x1b, 0x9a, 0x47, 0xc3, 0xb3, 0x11, 0xee, 0x39, 0x4e, 0xf7, 0xf0, 0xb4, 0xd7,
+	0x7e, 0x20, 0x1d, 0xdb, 0xba, 0x18, 0xac, 0xe4, 0x0a, 0x08, 0xa0, 0x8a, 0xbb, 0x83, 0xe3, 0xe1,
+	0x59, 0xbb, 0xb8, 0x7f, 0x00, 0x1b, 0xd7, 0xee, 0x81, 0xea, 0x50, 0x1e, 0x0c, 0x07, 0xaa, 0x59,
+	0xbe, 0x3e, 0x7c, 0xee, 0x8f, 0x64, 0x4b, 0x03, 0x6a, 0xc7, 0xbd, 0x93, 0xd3, 0xee, 0x79, 0xaf,
+	0x5d, 0x1c, 0x57, 0xf5, 0x5f, 0xcd, 0xab, 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xc2, 0x6a, 0xce,
+	0x1e, 0x7c, 0x06, 0x00, 0x00,
+}
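
The generated getters above (`GetPayload`, `GetResponseStatus`, etc.) all guard with `if m != nil`. A minimal self-contained sketch of why, using local stand-in types rather than the generated ones: Go permits calling a method on a nil pointer receiver, so chained access through missing sub-messages returns zero values instead of panicking.

```go
package main

import "fmt"

// Payload and SimpleRequest are simplified stand-ins for the generated
// grpc_testing types, showing the protoc-gen-go nil-safe getter pattern.
type Payload struct {
	Body []byte
}

// GetBody is safe to call on a nil *Payload.
func (p *Payload) GetBody() []byte {
	if p != nil {
		return p.Body
	}
	return nil
}

type SimpleRequest struct {
	Payload *Payload
}

// GetPayload is safe to call on a nil *SimpleRequest.
func (r *SimpleRequest) GetPayload() *Payload {
	if r != nil {
		return r.Payload
	}
	return nil
}

func main() {
	var req *SimpleRequest // nil message
	// Chained getters never panic; a missing Payload yields an empty body.
	fmt.Println(len(req.GetPayload().GetBody()))
}
```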
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/messages.proto b/go/src/google.golang.org/grpc/benchmark/grpc_testing/messages.proto
new file mode 100644
index 0000000..b1abc9e
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/messages.proto
@@ -0,0 +1,172 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Message definitions to be used by integration test service definitions.
+
+syntax = "proto3";
+
+package grpc.testing;
+
+// The type of payload that should be returned.
+enum PayloadType {
+  // Compressable text format.
+  COMPRESSABLE = 0;
+
+  // Uncompressable binary format.
+  UNCOMPRESSABLE = 1;
+
+  // Randomly chosen from all other formats defined in this enum.
+  RANDOM = 2;
+}
+
+// Compression algorithms
+enum CompressionType {
+  // No compression
+  NONE = 0;
+  GZIP = 1;
+  DEFLATE = 2;
+}
+
+// A block of data, to simply increase gRPC message size.
+message Payload {
+  // The type of data in body.
+  PayloadType type = 1;
+  // Primary contents of payload.
+  bytes body = 2;
+}
+
+// A protobuf representation for grpc status. This is used by test
+// clients to specify a status that the server should attempt to return.
+message EchoStatus {
+  int32 code = 1;
+  string message = 2;
+}
+
+// Unary request.
+message SimpleRequest {
+  // Desired payload type in the response from the server.
+  // If response_type is RANDOM, server randomly chooses one from other formats.
+  PayloadType response_type = 1;
+
+  // Desired payload size in the response from the server.
+  // If response_type is COMPRESSABLE, this denotes the size before compression.
+  int32 response_size = 2;
+
+  // Optional input payload sent along with the request.
+  Payload payload = 3;
+
+  // Whether SimpleResponse should include username.
+  bool fill_username = 4;
+
+  // Whether SimpleResponse should include OAuth scope.
+  bool fill_oauth_scope = 5;
+
+  // Compression algorithm to be used by the server for the response (stream)
+  CompressionType response_compression = 6;
+
+  // Whether server should return a given status
+  EchoStatus response_status = 7;
+}
+
+// Unary response, as configured by the request.
+message SimpleResponse {
+  // Payload to increase message size.
+  Payload payload = 1;
+  // The user the request came from, for verifying authentication was
+  // successful when the client expected it.
+  string username = 2;
+  // OAuth scope.
+  string oauth_scope = 3;
+}
+
+// Client-streaming request.
+message StreamingInputCallRequest {
+  // Optional input payload sent along with the request.
+  Payload payload = 1;
+
+  // Not expecting any payload from the response.
+}
+
+// Client-streaming response.
+message StreamingInputCallResponse {
+  // Aggregated size of payloads received from the client.
+  int32 aggregated_payload_size = 1;
+}
+
+// Configuration for a particular response.
+message ResponseParameters {
+  // Desired payload sizes in responses from the server.
+  // If response_type is COMPRESSABLE, this denotes the size before compression.
+  int32 size = 1;
+
+  // Desired interval between consecutive responses in the response stream in
+  // microseconds.
+  int32 interval_us = 2;
+}
+
+// Server-streaming request.
+message StreamingOutputCallRequest {
+  // Desired payload type in the response from the server.
+  // If response_type is RANDOM, the payload from each response in the stream
+  // might be of different types. This is to simulate a mixed type of payload
+  // stream.
+  PayloadType response_type = 1;
+
+  // Configuration for each expected response message.
+  repeated ResponseParameters response_parameters = 2;
+
+  // Optional input payload sent along with the request.
+  Payload payload = 3;
+
+  // Compression algorithm to be used by the server for the response (stream)
+  CompressionType response_compression = 6;
+
+  // Whether server should return a given status
+  EchoStatus response_status = 7;
+}
+
+// Server-streaming response, as configured by the request and parameters.
+message StreamingOutputCallResponse {
+  // Payload to increase response size.
+  Payload payload = 1;
+}
+
+// For reconnect interop test only.
+// Client tells server what reconnection parameters it used.
+message ReconnectParams {
+  int32 max_reconnect_backoff_ms = 1;
+}
+
+// For reconnect interop test only.
+// Server tells client whether its reconnects are following the spec and the
+// reconnect backoffs it saw.
+message ReconnectInfo {
+  bool passed = 1;
+  repeated int32 backoff_ms = 2;
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/payloads.pb.go b/go/src/google.golang.org/grpc/benchmark/grpc_testing/payloads.pb.go
new file mode 100644
index 0000000..4394d55
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/payloads.pb.go
@@ -0,0 +1,221 @@
+// Code generated by protoc-gen-go.
+// source: payloads.proto
+// DO NOT EDIT!
+
+package grpc_testing
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+type ByteBufferParams struct {
+	ReqSize  int32 `protobuf:"varint,1,opt,name=req_size,json=reqSize" json:"req_size,omitempty"`
+	RespSize int32 `protobuf:"varint,2,opt,name=resp_size,json=respSize" json:"resp_size,omitempty"`
+}
+
+func (m *ByteBufferParams) Reset()                    { *m = ByteBufferParams{} }
+func (m *ByteBufferParams) String() string            { return proto.CompactTextString(m) }
+func (*ByteBufferParams) ProtoMessage()               {}
+func (*ByteBufferParams) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{0} }
+
+type SimpleProtoParams struct {
+	ReqSize  int32 `protobuf:"varint,1,opt,name=req_size,json=reqSize" json:"req_size,omitempty"`
+	RespSize int32 `protobuf:"varint,2,opt,name=resp_size,json=respSize" json:"resp_size,omitempty"`
+}
+
+func (m *SimpleProtoParams) Reset()                    { *m = SimpleProtoParams{} }
+func (m *SimpleProtoParams) String() string            { return proto.CompactTextString(m) }
+func (*SimpleProtoParams) ProtoMessage()               {}
+func (*SimpleProtoParams) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{1} }
+
+type ComplexProtoParams struct {
+}
+
+func (m *ComplexProtoParams) Reset()                    { *m = ComplexProtoParams{} }
+func (m *ComplexProtoParams) String() string            { return proto.CompactTextString(m) }
+func (*ComplexProtoParams) ProtoMessage()               {}
+func (*ComplexProtoParams) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{2} }
+
+type PayloadConfig struct {
+	// Types that are valid to be assigned to Payload:
+	//	*PayloadConfig_BytebufParams
+	//	*PayloadConfig_SimpleParams
+	//	*PayloadConfig_ComplexParams
+	Payload isPayloadConfig_Payload `protobuf_oneof:"payload"`
+}
+
+func (m *PayloadConfig) Reset()                    { *m = PayloadConfig{} }
+func (m *PayloadConfig) String() string            { return proto.CompactTextString(m) }
+func (*PayloadConfig) ProtoMessage()               {}
+func (*PayloadConfig) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{3} }
+
+type isPayloadConfig_Payload interface {
+	isPayloadConfig_Payload()
+}
+
+type PayloadConfig_BytebufParams struct {
+	BytebufParams *ByteBufferParams `protobuf:"bytes,1,opt,name=bytebuf_params,json=bytebufParams,oneof"`
+}
+type PayloadConfig_SimpleParams struct {
+	SimpleParams *SimpleProtoParams `protobuf:"bytes,2,opt,name=simple_params,json=simpleParams,oneof"`
+}
+type PayloadConfig_ComplexParams struct {
+	ComplexParams *ComplexProtoParams `protobuf:"bytes,3,opt,name=complex_params,json=complexParams,oneof"`
+}
+
+func (*PayloadConfig_BytebufParams) isPayloadConfig_Payload() {}
+func (*PayloadConfig_SimpleParams) isPayloadConfig_Payload()  {}
+func (*PayloadConfig_ComplexParams) isPayloadConfig_Payload() {}
+
+func (m *PayloadConfig) GetPayload() isPayloadConfig_Payload {
+	if m != nil {
+		return m.Payload
+	}
+	return nil
+}
+
+func (m *PayloadConfig) GetBytebufParams() *ByteBufferParams {
+	if x, ok := m.GetPayload().(*PayloadConfig_BytebufParams); ok {
+		return x.BytebufParams
+	}
+	return nil
+}
+
+func (m *PayloadConfig) GetSimpleParams() *SimpleProtoParams {
+	if x, ok := m.GetPayload().(*PayloadConfig_SimpleParams); ok {
+		return x.SimpleParams
+	}
+	return nil
+}
+
+func (m *PayloadConfig) GetComplexParams() *ComplexProtoParams {
+	if x, ok := m.GetPayload().(*PayloadConfig_ComplexParams); ok {
+		return x.ComplexParams
+	}
+	return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*PayloadConfig) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _PayloadConfig_OneofMarshaler, _PayloadConfig_OneofUnmarshaler, _PayloadConfig_OneofSizer, []interface{}{
+		(*PayloadConfig_BytebufParams)(nil),
+		(*PayloadConfig_SimpleParams)(nil),
+		(*PayloadConfig_ComplexParams)(nil),
+	}
+}
+
+func _PayloadConfig_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*PayloadConfig)
+	// payload
+	switch x := m.Payload.(type) {
+	case *PayloadConfig_BytebufParams:
+		b.EncodeVarint(1<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.BytebufParams); err != nil {
+			return err
+		}
+	case *PayloadConfig_SimpleParams:
+		b.EncodeVarint(2<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.SimpleParams); err != nil {
+			return err
+		}
+	case *PayloadConfig_ComplexParams:
+		b.EncodeVarint(3<<3 | proto.WireBytes)
+		if err := b.EncodeMessage(x.ComplexParams); err != nil {
+			return err
+		}
+	case nil:
+	default:
+		return fmt.Errorf("PayloadConfig.Payload has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _PayloadConfig_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*PayloadConfig)
+	switch tag {
+	case 1: // payload.bytebuf_params
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(ByteBufferParams)
+		err := b.DecodeMessage(msg)
+		m.Payload = &PayloadConfig_BytebufParams{msg}
+		return true, err
+	case 2: // payload.simple_params
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(SimpleProtoParams)
+		err := b.DecodeMessage(msg)
+		m.Payload = &PayloadConfig_SimpleParams{msg}
+		return true, err
+	case 3: // payload.complex_params
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		msg := new(ComplexProtoParams)
+		err := b.DecodeMessage(msg)
+		m.Payload = &PayloadConfig_ComplexParams{msg}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _PayloadConfig_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*PayloadConfig)
+	// payload
+	switch x := m.Payload.(type) {
+	case *PayloadConfig_BytebufParams:
+		s := proto.Size(x.BytebufParams)
+		n += proto.SizeVarint(1<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *PayloadConfig_SimpleParams:
+		s := proto.Size(x.SimpleParams)
+		n += proto.SizeVarint(2<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case *PayloadConfig_ComplexParams:
+		s := proto.Size(x.ComplexParams)
+		n += proto.SizeVarint(3<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(s))
+		n += s
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
+func init() {
+	proto.RegisterType((*ByteBufferParams)(nil), "grpc.testing.ByteBufferParams")
+	proto.RegisterType((*SimpleProtoParams)(nil), "grpc.testing.SimpleProtoParams")
+	proto.RegisterType((*ComplexProtoParams)(nil), "grpc.testing.ComplexProtoParams")
+	proto.RegisterType((*PayloadConfig)(nil), "grpc.testing.PayloadConfig")
+}
+
+var fileDescriptor2 = []byte{
+	// 250 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x2b, 0x48, 0xac, 0xcc,
+	0xc9, 0x4f, 0x4c, 0x29, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x49, 0x2f, 0x2a, 0x48,
+	0xd6, 0x2b, 0x49, 0x2d, 0x2e, 0xc9, 0xcc, 0x4b, 0x57, 0xf2, 0xe2, 0x12, 0x70, 0xaa, 0x2c, 0x49,
+	0x75, 0x2a, 0x4d, 0x4b, 0x4b, 0x2d, 0x0a, 0x48, 0x2c, 0x4a, 0xcc, 0x2d, 0x16, 0x92, 0xe4, 0xe2,
+	0x28, 0x4a, 0x2d, 0x8c, 0x2f, 0xce, 0xac, 0x4a, 0x95, 0x60, 0x54, 0x60, 0xd4, 0x60, 0x0d, 0x62,
+	0x07, 0xf2, 0x83, 0x81, 0x5c, 0x21, 0x69, 0x2e, 0xce, 0xa2, 0xd4, 0xe2, 0x02, 0x88, 0x1c, 0x13,
+	0x58, 0x8e, 0x03, 0x24, 0x00, 0x92, 0x54, 0xf2, 0xe6, 0x12, 0x0c, 0xce, 0xcc, 0x2d, 0xc8, 0x49,
+	0x0d, 0x00, 0x59, 0x44, 0xa1, 0x61, 0x22, 0x5c, 0x42, 0xce, 0xf9, 0x20, 0xc3, 0x2a, 0x90, 0x4c,
+	0x53, 0xfa, 0xc6, 0xc8, 0xc5, 0x1b, 0x00, 0xf1, 0x8f, 0x73, 0x7e, 0x5e, 0x5a, 0x66, 0xba, 0x90,
+	0x3b, 0x17, 0x5f, 0x12, 0xd0, 0x03, 0x49, 0xa5, 0x69, 0xf1, 0x05, 0x60, 0x35, 0x60, 0x5b, 0xb8,
+	0x8d, 0xe4, 0xf4, 0x90, 0xfd, 0xa9, 0x87, 0xee, 0x49, 0x0f, 0x86, 0x20, 0x5e, 0xa8, 0x3e, 0xa8,
+	0x43, 0xdd, 0xb8, 0x78, 0x8b, 0xc1, 0xae, 0x87, 0x99, 0xc3, 0x04, 0x36, 0x47, 0x1e, 0xd5, 0x1c,
+	0x0c, 0x0f, 0x02, 0x0d, 0xe2, 0x81, 0xe8, 0x83, 0x9a, 0xe3, 0xc9, 0xc5, 0x97, 0x0c, 0x71, 0x38,
+	0xcc, 0x20, 0x66, 0xb0, 0x41, 0x0a, 0xa8, 0x06, 0x61, 0x7a, 0x0e, 0xe4, 0x24, 0xa8, 0x4e, 0x88,
+	0x80, 0x13, 0x27, 0x17, 0x3b, 0x34, 0xf2, 0x92, 0xd8, 0xc0, 0x91, 0x67, 0x0c, 0x08, 0x00, 0x00,
+	0xff, 0xff, 0xb0, 0x8c, 0x18, 0x4e, 0xce, 0x01, 0x00, 0x00,
+}
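For readers unfamiliar with how protoc-gen-go models `oneof payload` in the file above: the field becomes an unexported marker interface, each case a one-field wrapper struct, and consumers dispatch with a type switch (as the marshaler/sizer funcs do). This self-contained sketch mirrors that pattern with simplified stand-in types; it is illustrative only, not the generated code itself:

```go
package main

import "fmt"

// Marker interface standing in for the generated isPayloadConfig_Payload.
type isPayloadConfig_Payload interface{ isPayloadConfig_Payload() }

// Simplified stand-ins for the generated message types.
type ByteBufferParams struct{ ReqSize, RespSize int32 }
type SimpleProtoParams struct{ ReqSize, RespSize int32 }

// One wrapper struct per oneof case, each implementing the marker interface.
type PayloadConfig_BytebufParams struct{ BytebufParams *ByteBufferParams }
type PayloadConfig_SimpleParams struct{ SimpleParams *SimpleProtoParams }

func (*PayloadConfig_BytebufParams) isPayloadConfig_Payload() {}
func (*PayloadConfig_SimpleParams) isPayloadConfig_Payload()  {}

type PayloadConfig struct{ Payload isPayloadConfig_Payload }

// describe dispatches on the concrete wrapper type, the same shape of type
// switch used by the generated marshaler and sizer.
func describe(c *PayloadConfig) string {
	switch x := c.Payload.(type) {
	case *PayloadConfig_BytebufParams:
		return fmt.Sprintf("bytebuf req=%d", x.BytebufParams.ReqSize)
	case *PayloadConfig_SimpleParams:
		return fmt.Sprintf("simple req=%d", x.SimpleParams.ReqSize)
	case nil:
		return "unset"
	default:
		return fmt.Sprintf("unexpected %T", x)
	}
}

func main() {
	c := &PayloadConfig{Payload: &PayloadConfig_SimpleParams{&SimpleProtoParams{ReqSize: 8}}}
	fmt.Println(describe(c)) // → simple req=8
}
```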
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/payloads.proto b/go/src/google.golang.org/grpc/benchmark/grpc_testing/payloads.proto
new file mode 100644
index 0000000..056fe0c
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/payloads.proto
@@ -0,0 +1,55 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package grpc.testing;
+
+message ByteBufferParams {
+  int32 req_size = 1;
+  int32 resp_size = 2;
+}
+
+message SimpleProtoParams {
+  int32 req_size = 1;
+  int32 resp_size = 2;
+}
+
+message ComplexProtoParams {
+  // TODO (vpai): Fill this in once the details of complex, representative
+  //              protos are decided
+}
+
+message PayloadConfig {
+  oneof payload {
+    ByteBufferParams bytebuf_params = 1;
+    SimpleProtoParams simple_params = 2;
+    ComplexProtoParams complex_params = 3;
+  }
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/services.pb.go b/go/src/google.golang.org/grpc/benchmark/grpc_testing/services.pb.go
new file mode 100644
index 0000000..9f1d8fd
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/services.pb.go
@@ -0,0 +1,439 @@
+// Code generated by protoc-gen-go.
+// source: services.proto
+// DO NOT EDIT!
+
+package grpc_testing
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+import (
+	context "golang.org/x/net/context"
+	grpc "google.golang.org/grpc"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
+// Client API for BenchmarkService service
+
+type BenchmarkServiceClient interface {
+	// One request followed by one response.
+	// The server returns the client payload as-is.
+	UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error)
+	// One request followed by one response.
+	// The server returns the client payload as-is.
+	StreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_StreamingCallClient, error)
+}
+
+type benchmarkServiceClient struct {
+	cc *grpc.ClientConn
+}
+
+func NewBenchmarkServiceClient(cc *grpc.ClientConn) BenchmarkServiceClient {
+	return &benchmarkServiceClient{cc}
+}
+
+func (c *benchmarkServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) {
+	out := new(SimpleResponse)
+	err := grpc.Invoke(ctx, "/grpc.testing.BenchmarkService/UnaryCall", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *benchmarkServiceClient) StreamingCall(ctx context.Context, opts ...grpc.CallOption) (BenchmarkService_StreamingCallClient, error) {
+	stream, err := grpc.NewClientStream(ctx, &_BenchmarkService_serviceDesc.Streams[0], c.cc, "/grpc.testing.BenchmarkService/StreamingCall", opts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &benchmarkServiceStreamingCallClient{stream}
+	return x, nil
+}
+
+type BenchmarkService_StreamingCallClient interface {
+	Send(*SimpleRequest) error
+	Recv() (*SimpleResponse, error)
+	grpc.ClientStream
+}
+
+type benchmarkServiceStreamingCallClient struct {
+	grpc.ClientStream
+}
+
+func (x *benchmarkServiceStreamingCallClient) Send(m *SimpleRequest) error {
+	return x.ClientStream.SendMsg(m)
+}
+
+func (x *benchmarkServiceStreamingCallClient) Recv() (*SimpleResponse, error) {
+	m := new(SimpleResponse)
+	if err := x.ClientStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+// Server API for BenchmarkService service
+
+type BenchmarkServiceServer interface {
+	// One request followed by one response.
+	// The server returns the client payload as-is.
+	UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error)
+	// One request followed by one response.
+	// The server returns the client payload as-is.
+	StreamingCall(BenchmarkService_StreamingCallServer) error
+}
+
+func RegisterBenchmarkServiceServer(s *grpc.Server, srv BenchmarkServiceServer) {
+	s.RegisterService(&_BenchmarkService_serviceDesc, srv)
+}
+
+func _BenchmarkService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(SimpleRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(BenchmarkServiceServer).UnaryCall(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.BenchmarkService/UnaryCall",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(BenchmarkServiceServer).UnaryCall(ctx, req.(*SimpleRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _BenchmarkService_StreamingCall_Handler(srv interface{}, stream grpc.ServerStream) error {
+	return srv.(BenchmarkServiceServer).StreamingCall(&benchmarkServiceStreamingCallServer{stream})
+}
+
+type BenchmarkService_StreamingCallServer interface {
+	Send(*SimpleResponse) error
+	Recv() (*SimpleRequest, error)
+	grpc.ServerStream
+}
+
+type benchmarkServiceStreamingCallServer struct {
+	grpc.ServerStream
+}
+
+func (x *benchmarkServiceStreamingCallServer) Send(m *SimpleResponse) error {
+	return x.ServerStream.SendMsg(m)
+}
+
+func (x *benchmarkServiceStreamingCallServer) Recv() (*SimpleRequest, error) {
+	m := new(SimpleRequest)
+	if err := x.ServerStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+var _BenchmarkService_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "grpc.testing.BenchmarkService",
+	HandlerType: (*BenchmarkServiceServer)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "UnaryCall",
+			Handler:    _BenchmarkService_UnaryCall_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{
+		{
+			StreamName:    "StreamingCall",
+			Handler:       _BenchmarkService_StreamingCall_Handler,
+			ServerStreams: true,
+			ClientStreams: true,
+		},
+	},
+}
+
+// Client API for WorkerService service
+
+type WorkerServiceClient interface {
+	// Start server with specified workload.
+	// First request sent specifies the ServerConfig followed by ServerStatus
+	// response. After that, a "Mark" can be sent anytime to request the latest
+	// stats. Closing the stream will initiate shutdown of the test server
+	// and once the shutdown has finished, the OK status is sent to terminate
+	// this RPC.
+	RunServer(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunServerClient, error)
+	// Start client with specified workload.
+	// First request sent specifies the ClientConfig followed by ClientStatus
+	// response. After that, a "Mark" can be sent anytime to request the latest
+	// stats. Closing the stream will initiate shutdown of the test client
+	// and once the shutdown has finished, the OK status is sent to terminate
+	// this RPC.
+	RunClient(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunClientClient, error)
+	// Just return the core count - unary call
+	CoreCount(ctx context.Context, in *CoreRequest, opts ...grpc.CallOption) (*CoreResponse, error)
+	// Quit this worker
+	QuitWorker(ctx context.Context, in *Void, opts ...grpc.CallOption) (*Void, error)
+}
+
+type workerServiceClient struct {
+	cc *grpc.ClientConn
+}
+
+func NewWorkerServiceClient(cc *grpc.ClientConn) WorkerServiceClient {
+	return &workerServiceClient{cc}
+}
+
+func (c *workerServiceClient) RunServer(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunServerClient, error) {
+	stream, err := grpc.NewClientStream(ctx, &_WorkerService_serviceDesc.Streams[0], c.cc, "/grpc.testing.WorkerService/RunServer", opts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &workerServiceRunServerClient{stream}
+	return x, nil
+}
+
+type WorkerService_RunServerClient interface {
+	Send(*ServerArgs) error
+	Recv() (*ServerStatus, error)
+	grpc.ClientStream
+}
+
+type workerServiceRunServerClient struct {
+	grpc.ClientStream
+}
+
+func (x *workerServiceRunServerClient) Send(m *ServerArgs) error {
+	return x.ClientStream.SendMsg(m)
+}
+
+func (x *workerServiceRunServerClient) Recv() (*ServerStatus, error) {
+	m := new(ServerStatus)
+	if err := x.ClientStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+func (c *workerServiceClient) RunClient(ctx context.Context, opts ...grpc.CallOption) (WorkerService_RunClientClient, error) {
+	stream, err := grpc.NewClientStream(ctx, &_WorkerService_serviceDesc.Streams[1], c.cc, "/grpc.testing.WorkerService/RunClient", opts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &workerServiceRunClientClient{stream}
+	return x, nil
+}
+
+type WorkerService_RunClientClient interface {
+	Send(*ClientArgs) error
+	Recv() (*ClientStatus, error)
+	grpc.ClientStream
+}
+
+type workerServiceRunClientClient struct {
+	grpc.ClientStream
+}
+
+func (x *workerServiceRunClientClient) Send(m *ClientArgs) error {
+	return x.ClientStream.SendMsg(m)
+}
+
+func (x *workerServiceRunClientClient) Recv() (*ClientStatus, error) {
+	m := new(ClientStatus)
+	if err := x.ClientStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+func (c *workerServiceClient) CoreCount(ctx context.Context, in *CoreRequest, opts ...grpc.CallOption) (*CoreResponse, error) {
+	out := new(CoreResponse)
+	err := grpc.Invoke(ctx, "/grpc.testing.WorkerService/CoreCount", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *workerServiceClient) QuitWorker(ctx context.Context, in *Void, opts ...grpc.CallOption) (*Void, error) {
+	out := new(Void)
+	err := grpc.Invoke(ctx, "/grpc.testing.WorkerService/QuitWorker", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// Server API for WorkerService service
+
+type WorkerServiceServer interface {
+	// Start server with specified workload.
+	// First request sent specifies the ServerConfig followed by ServerStatus
+	// response. After that, a "Mark" can be sent anytime to request the latest
+	// stats. Closing the stream will initiate shutdown of the test server
+	// and once the shutdown has finished, the OK status is sent to terminate
+	// this RPC.
+	RunServer(WorkerService_RunServerServer) error
+	// Start client with specified workload.
+	// First request sent specifies the ClientConfig followed by ClientStatus
+	// response. After that, a "Mark" can be sent anytime to request the latest
+	// stats. Closing the stream will initiate shutdown of the test client
+	// and once the shutdown has finished, the OK status is sent to terminate
+	// this RPC.
+	RunClient(WorkerService_RunClientServer) error
+	// Just return the core count - unary call
+	CoreCount(context.Context, *CoreRequest) (*CoreResponse, error)
+	// Quit this worker
+	QuitWorker(context.Context, *Void) (*Void, error)
+}
+
+func RegisterWorkerServiceServer(s *grpc.Server, srv WorkerServiceServer) {
+	s.RegisterService(&_WorkerService_serviceDesc, srv)
+}
+
+func _WorkerService_RunServer_Handler(srv interface{}, stream grpc.ServerStream) error {
+	return srv.(WorkerServiceServer).RunServer(&workerServiceRunServerServer{stream})
+}
+
+type WorkerService_RunServerServer interface {
+	Send(*ServerStatus) error
+	Recv() (*ServerArgs, error)
+	grpc.ServerStream
+}
+
+type workerServiceRunServerServer struct {
+	grpc.ServerStream
+}
+
+func (x *workerServiceRunServerServer) Send(m *ServerStatus) error {
+	return x.ServerStream.SendMsg(m)
+}
+
+func (x *workerServiceRunServerServer) Recv() (*ServerArgs, error) {
+	m := new(ServerArgs)
+	if err := x.ServerStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+func _WorkerService_RunClient_Handler(srv interface{}, stream grpc.ServerStream) error {
+	return srv.(WorkerServiceServer).RunClient(&workerServiceRunClientServer{stream})
+}
+
+type WorkerService_RunClientServer interface {
+	Send(*ClientStatus) error
+	Recv() (*ClientArgs, error)
+	grpc.ServerStream
+}
+
+type workerServiceRunClientServer struct {
+	grpc.ServerStream
+}
+
+func (x *workerServiceRunClientServer) Send(m *ClientStatus) error {
+	return x.ServerStream.SendMsg(m)
+}
+
+func (x *workerServiceRunClientServer) Recv() (*ClientArgs, error) {
+	m := new(ClientArgs)
+	if err := x.ServerStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+func _WorkerService_CoreCount_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(CoreRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(WorkerServiceServer).CoreCount(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.WorkerService/CoreCount",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(WorkerServiceServer).CoreCount(ctx, req.(*CoreRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _WorkerService_QuitWorker_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(Void)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(WorkerServiceServer).QuitWorker(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.WorkerService/QuitWorker",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(WorkerServiceServer).QuitWorker(ctx, req.(*Void))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+var _WorkerService_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "grpc.testing.WorkerService",
+	HandlerType: (*WorkerServiceServer)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "CoreCount",
+			Handler:    _WorkerService_CoreCount_Handler,
+		},
+		{
+			MethodName: "QuitWorker",
+			Handler:    _WorkerService_QuitWorker_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{
+		{
+			StreamName:    "RunServer",
+			Handler:       _WorkerService_RunServer_Handler,
+			ServerStreams: true,
+			ClientStreams: true,
+		},
+		{
+			StreamName:    "RunClient",
+			Handler:       _WorkerService_RunClient_Handler,
+			ServerStreams: true,
+			ClientStreams: true,
+		},
+	},
+}
+
+var fileDescriptor3 = []byte{
+	// 254 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xa4, 0x91, 0xc1, 0x4a, 0xc4, 0x30,
+	0x10, 0x86, 0xa9, 0x07, 0xa1, 0xc1, 0x2e, 0x92, 0x93, 0x46, 0x1f, 0xc0, 0x53, 0x91, 0xd5, 0x17,
+	0x70, 0x8b, 0x1e, 0x05, 0xb7, 0xa8, 0xe7, 0x58, 0x87, 0x1a, 0x36, 0x4d, 0xea, 0xcc, 0x44, 0xf0,
+	0x49, 0x7c, 0x07, 0x9f, 0xd2, 0xee, 0x66, 0x0b, 0xb5, 0xe4, 0xb6, 0xc7, 0xf9, 0xbf, 0xe1, 0x23,
+	0x7f, 0x46, 0x2c, 0x08, 0xf0, 0xcb, 0x34, 0x40, 0x65, 0x8f, 0x9e, 0xbd, 0x3c, 0x69, 0xb1, 0x6f,
+	0x4a, 0x06, 0x62, 0xe3, 0x5a, 0xb5, 0xe8, 0x80, 0x48, 0xb7, 0x23, 0x55, 0x45, 0xe3, 0x1d, 0xa3,
+	0xb7, 0x71, 0x5c, 0xfe, 0x66, 0xe2, 0x74, 0x05, 0xae, 0xf9, 0xe8, 0x34, 0x6e, 0xea, 0x28, 0x92,
+	0x0f, 0x22, 0x7f, 0x76, 0x1a, 0xbf, 0x2b, 0x6d, 0xad, 0xbc, 0x28, 0xa7, 0xbe, 0xb2, 0x36, 0x5d,
+	0x6f, 0x61, 0x0d, 0x9f, 0x61, 0x08, 0xd4, 0x65, 0x1a, 0x52, 0xef, 0x1d, 0x81, 0x7c, 0x14, 0x45,
+	0xcd, 0x08, 0xba, 0x1b, 0xd8, 0x81, 0xae, 0xab, 0xec, 0x3a, 0x5b, 0xfe, 0x1c, 0x89, 0xe2, 0xd5,
+	0xe3, 0x06, 0x70, 0x7c, 0xe9, 0xbd, 0xc8, 0xd7, 0xc1, 0x6d, 0x27, 0x40, 0x79, 0x36, 0x13, 0xec,
+	0xd2, 0x3b, 0x6c, 0x49, 0xa9, 0x14, 0xa9, 0x59, 0x73, 0xa0, 0xad, 0x78, 0xaf, 0xa9, 0xac, 0x01,
+	0xc7, 0x73, 0x4d, 0x4c, 0x53, 0x9a, 0x48, 0x26, 0x9a, 0x95, 0xc8, 0x2b, 0x8f, 0x50, 0xf9, 0x30,
+	0x68, 0xce, 0x67, 0xcb, 0x03, 0x18, 0x9b, 0xaa, 0x14, 0xda, 0xff, 0xd9, 0xad, 0x10, 0x4f, 0xc1,
+	0x70, 0xac, 0x29, 0xe5, 0xff, 0xcd, 0x17, 0x6f, 0xde, 0x55, 0x22, 0x7b, 0x3b, 0xde, 0x5d, 0xf3,
+	0xe6, 0x2f, 0x00, 0x00, 0xff, 0xff, 0x3b, 0x84, 0x02, 0xe3, 0x0c, 0x02, 0x00, 0x00,
+}
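The generated `StreamingCall` wrappers above give callers a typed `Send`/`Recv` surface over an untyped `grpc.ClientStream`. The ping-pong usage pattern can be sketched without a live connection; `echoStream` below is a hypothetical in-memory stand-in for the stream (a buffered channel plays the transport), showing only the calling convention, not real gRPC plumbing:

```go
package main

import "fmt"

// Simplified stand-ins for the generated request/response types.
type SimpleRequest struct{ N int }
type SimpleResponse struct{ N int }

// echoStream mimics the typed surface of
// BenchmarkService_StreamingCallClient: Send enqueues a request, Recv
// returns the server's echo. A buffered channel replaces the transport.
type echoStream struct{ ch chan int }

func (s *echoStream) Send(m *SimpleRequest) error { s.ch <- m.N; return nil }
func (s *echoStream) Recv() (*SimpleResponse, error) {
	return &SimpleResponse{N: <-s.ch}, nil
}

func main() {
	stream := &echoStream{ch: make(chan int, 1)}
	// Typical ping-pong loop against a bidirectional streaming stub.
	for i := 0; i < 3; i++ {
		if err := stream.Send(&SimpleRequest{N: i}); err != nil {
			panic(err)
		}
		resp, err := stream.Recv()
		if err != nil {
			panic(err)
		}
		fmt.Println(resp.N)
	}
}
```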
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/services.proto b/go/src/google.golang.org/grpc/benchmark/grpc_testing/services.proto
new file mode 100644
index 0000000..c2acca7
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/services.proto
@@ -0,0 +1,71 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// An integration test service that covers all the method signature permutations
+// of unary/streaming requests/responses.
+syntax = "proto3";
+
+import "messages.proto";
+import "control.proto";
+
+package grpc.testing;
+
+service BenchmarkService {
+  // One request followed by one response.
+  // The server returns the client payload as-is.
+  rpc UnaryCall(SimpleRequest) returns (SimpleResponse);
+
+  // One request followed by one response.
+  // The server returns the client payload as-is.
+  rpc StreamingCall(stream SimpleRequest) returns (stream SimpleResponse);
+}
+
+service WorkerService {
+  // Start server with specified workload.
+  // First request sent specifies the ServerConfig followed by ServerStatus
+  // response. After that, a "Mark" can be sent anytime to request the latest
+  // stats. Closing the stream will initiate shutdown of the test server
+  // and once the shutdown has finished, the OK status is sent to terminate
+  // this RPC.
+  rpc RunServer(stream ServerArgs) returns (stream ServerStatus);
+
+  // Start client with specified workload.
+  // First request sent specifies the ClientConfig followed by ClientStatus
+  // response. After that, a "Mark" can be sent anytime to request the latest
+  // stats. Closing the stream will initiate shutdown of the test client
+  // and once the shutdown has finished, the OK status is sent to terminate
+  // this RPC.
+  rpc RunClient(stream ClientArgs) returns (stream ClientStatus);
+
+  // Just return the core count - unary call
+  rpc CoreCount(CoreRequest) returns (CoreResponse);
+
+  // Quit this worker
+  rpc QuitWorker(Void) returns (Void);
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/stats.pb.go b/go/src/google.golang.org/grpc/benchmark/grpc_testing/stats.pb.go
new file mode 100644
index 0000000..ef04acc
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/stats.pb.go
@@ -0,0 +1,109 @@
+// Code generated by protoc-gen-go.
+// source: stats.proto
+// DO NOT EDIT!
+
+package grpc_testing
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+type ServerStats struct {
+	// wall clock time change in seconds since last reset
+	TimeElapsed float64 `protobuf:"fixed64,1,opt,name=time_elapsed,json=timeElapsed" json:"time_elapsed,omitempty"`
+	// change in user time (in seconds) used by the server since last reset
+	TimeUser float64 `protobuf:"fixed64,2,opt,name=time_user,json=timeUser" json:"time_user,omitempty"`
+	// change in server time (in seconds) used by the server process and all
+	// threads since last reset
+	TimeSystem float64 `protobuf:"fixed64,3,opt,name=time_system,json=timeSystem" json:"time_system,omitempty"`
+}
+
+func (m *ServerStats) Reset()                    { *m = ServerStats{} }
+func (m *ServerStats) String() string            { return proto.CompactTextString(m) }
+func (*ServerStats) ProtoMessage()               {}
+func (*ServerStats) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{0} }
+
+// Histogram params based on grpc/support/histogram.c
+type HistogramParams struct {
+	Resolution  float64 `protobuf:"fixed64,1,opt,name=resolution" json:"resolution,omitempty"`
+	MaxPossible float64 `protobuf:"fixed64,2,opt,name=max_possible,json=maxPossible" json:"max_possible,omitempty"`
+}
+
+func (m *HistogramParams) Reset()                    { *m = HistogramParams{} }
+func (m *HistogramParams) String() string            { return proto.CompactTextString(m) }
+func (*HistogramParams) ProtoMessage()               {}
+func (*HistogramParams) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{1} }
+
+// Histogram data based on grpc/support/histogram.c
+type HistogramData struct {
+	Bucket       []uint32 `protobuf:"varint,1,rep,name=bucket" json:"bucket,omitempty"`
+	MinSeen      float64  `protobuf:"fixed64,2,opt,name=min_seen,json=minSeen" json:"min_seen,omitempty"`
+	MaxSeen      float64  `protobuf:"fixed64,3,opt,name=max_seen,json=maxSeen" json:"max_seen,omitempty"`
+	Sum          float64  `protobuf:"fixed64,4,opt,name=sum" json:"sum,omitempty"`
+	SumOfSquares float64  `protobuf:"fixed64,5,opt,name=sum_of_squares,json=sumOfSquares" json:"sum_of_squares,omitempty"`
+	Count        float64  `protobuf:"fixed64,6,opt,name=count" json:"count,omitempty"`
+}
+
+func (m *HistogramData) Reset()                    { *m = HistogramData{} }
+func (m *HistogramData) String() string            { return proto.CompactTextString(m) }
+func (*HistogramData) ProtoMessage()               {}
+func (*HistogramData) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{2} }
+
+type ClientStats struct {
+	// Latency histogram. Data points are in nanoseconds.
+	Latencies *HistogramData `protobuf:"bytes,1,opt,name=latencies" json:"latencies,omitempty"`
+	// See ServerStats for details.
+	TimeElapsed float64 `protobuf:"fixed64,2,opt,name=time_elapsed,json=timeElapsed" json:"time_elapsed,omitempty"`
+	TimeUser    float64 `protobuf:"fixed64,3,opt,name=time_user,json=timeUser" json:"time_user,omitempty"`
+	TimeSystem  float64 `protobuf:"fixed64,4,opt,name=time_system,json=timeSystem" json:"time_system,omitempty"`
+}
+
+func (m *ClientStats) Reset()                    { *m = ClientStats{} }
+func (m *ClientStats) String() string            { return proto.CompactTextString(m) }
+func (*ClientStats) ProtoMessage()               {}
+func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor4, []int{3} }
+
+func (m *ClientStats) GetLatencies() *HistogramData {
+	if m != nil {
+		return m.Latencies
+	}
+	return nil
+}
+
+func init() {
+	proto.RegisterType((*ServerStats)(nil), "grpc.testing.ServerStats")
+	proto.RegisterType((*HistogramParams)(nil), "grpc.testing.HistogramParams")
+	proto.RegisterType((*HistogramData)(nil), "grpc.testing.HistogramData")
+	proto.RegisterType((*ClientStats)(nil), "grpc.testing.ClientStats")
+}
+
+var fileDescriptor4 = []byte{
+	// 342 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x84, 0x92, 0x4f, 0x4f, 0xe3, 0x30,
+	0x10, 0xc5, 0x95, 0xa6, 0xed, 0xb6, 0x93, 0x76, 0x77, 0x65, 0xad, 0x56, 0x41, 0x95, 0xf8, 0x13,
+	0x71, 0xe8, 0x29, 0x07, 0x38, 0x71, 0x06, 0x24, 0x6e, 0x54, 0x0d, 0x9c, 0x23, 0x37, 0x4c, 0x2b,
+	0x8b, 0xc4, 0x0e, 0x99, 0x09, 0x2a, 0x1f, 0x09, 0xf1, 0x25, 0x71, 0x9c, 0x08, 0x0a, 0x48, 0x70,
+	0x49, 0xf2, 0x7e, 0x6f, 0x34, 0xe3, 0xc9, 0x33, 0x04, 0xc4, 0x92, 0x29, 0x2e, 0x2b, 0xc3, 0x46,
+	0x4c, 0x36, 0x55, 0x99, 0xc5, 0x8c, 0xc4, 0x4a, 0x6f, 0x22, 0x0d, 0x41, 0x82, 0xd5, 0x23, 0x56,
+	0x49, 0x53, 0x22, 0x8e, 0x60, 0xc2, 0xaa, 0xc0, 0x14, 0x73, 0x59, 0x12, 0xde, 0x85, 0xde, 0xa1,
+	0x37, 0xf7, 0x96, 0x41, 0xc3, 0x2e, 0x5b, 0x24, 0x66, 0x30, 0x76, 0x25, 0x35, 0x61, 0x15, 0xf6,
+	0x9c, 0x3f, 0x6a, 0xc0, 0xad, 0xd5, 0xe2, 0x00, 0x5c, 0x6d, 0x4a, 0x4f, 0xc4, 0x58, 0x84, 0xbe,
+	0xb3, 0xa1, 0x41, 0x89, 0x23, 0xd1, 0x0d, 0xfc, 0xb9, 0x52, 0xc4, 0x66, 0x53, 0xc9, 0x62, 0x21,
+	0xed, 0x83, 0xc4, 0x3e, 0x40, 0x85, 0x64, 0xf2, 0x9a, 0x95, 0xd1, 0xdd, 0xc4, 0x1d, 0xd2, 0x9c,
+	0xa9, 0x90, 0xdb, 0xb4, 0x34, 0x44, 0x6a, 0x95, 0x63, 0x37, 0x33, 0xb0, 0x6c, 0xd1, 0xa1, 0xe8,
+	0xc5, 0x83, 0xe9, 0x5b, 0xdb, 0x0b, 0xc9, 0x52, 0xfc, 0x87, 0xe1, 0xaa, 0xce, 0xee, 0x91, 0x6d,
+	0x43, 0x7f, 0x3e, 0x5d, 0x76, 0x4a, 0xec, 0xc1, 0xa8, 0x50, 0x3a, 0x25, 0x44, 0xdd, 0x35, 0xfa,
+	0x65, 0x75, 0x62, 0xa5, 0xb3, 0xec, 0x1c, 0x67, 0xf9, 0x9d, 0x25, 0xb7, 0xce, 0xfa, 0x0b, 0x3e,
+	0xd5, 0x45, 0xd8, 0x77, 0xb4, 0xf9, 0x14, 0xc7, 0xf0, 0xdb, 0xbe, 0x52, 0xb3, 0x4e, 0xe9, 0xa1,
+	0x96, 0xf6, 0xb4, 0xe1, 0xc0, 0x99, 0x13, 0x4b, 0xaf, 0xd7, 0x49, 0xcb, 0xc4, 0x3f, 0x18, 0x64,
+	0xa6, 0xd6, 0x1c, 0x0e, 0x9d, 0xd9, 0x8a, 0xe8, 0xd9, 0x83, 0xe0, 0x3c, 0x57, 0xa8, 0xb9, 0xfd,
+	0xe9, 0x67, 0x30, 0xce, 0x25, 0xa3, 0xce, 0x94, 0x6d, 0xd3, 0xec, 0x1f, 0x9c, 0xcc, 0xe2, 0xdd,
+	0x94, 0xe2, 0x0f, 0xbb, 0x2d, 0xdf, 0xab, 0xbf, 0xe4, 0xd5, 0xfb, 0x21, 0x2f, 0xff, 0xfb, 0xbc,
+	0xfa, 0x9f, 0xf3, 0x5a, 0x0d, 0xdd, 0xa5, 0x39, 0x7d, 0x0d, 0x00, 0x00, 0xff, 0xff, 0xea, 0x75,
+	0x34, 0x90, 0x43, 0x02, 0x00, 0x00,
+}
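`HistogramData` carries running aggregates (`Sum`, `SumOfSquares`, `Count`) rather than raw samples, which is enough to recover the mean and variance. A minimal sketch of that derivation, using a simplified copy of the struct (the formula is standard; it is not code from the patch):

```go
package main

import "fmt"

// Simplified mirror of HistogramData's aggregate fields.
type HistogramData struct {
	Sum, SumOfSquares, Count float64
}

// meanAndVariance derives summary statistics from the running aggregates:
// mean = Σx/n, variance = Σx²/n − mean².
func meanAndVariance(h HistogramData) (mean, variance float64) {
	if h.Count == 0 {
		return 0, 0
	}
	mean = h.Sum / h.Count
	variance = h.SumOfSquares/h.Count - mean*mean
	return mean, variance
}

func main() {
	// Aggregates for the samples 1, 2, 3, 4.
	h := HistogramData{Sum: 10, SumOfSquares: 30, Count: 4}
	m, v := meanAndVariance(h)
	fmt.Println(m, v) // → 2.5 1.25
}
```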
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/stats.proto b/go/src/google.golang.org/grpc/benchmark/grpc_testing/stats.proto
new file mode 100644
index 0000000..9bc3cb2
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/grpc_testing/stats.proto
@@ -0,0 +1,70 @@
+// Copyright 2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package grpc.testing;
+
+message ServerStats {
+  // wall clock time change in seconds since last reset
+  double time_elapsed = 1;
+
+  // change in user time (in seconds) used by the server since last reset
+  double time_user = 2;
+
+  // change in server time (in seconds) used by the server process and all
+  // threads since last reset
+  double time_system = 3;
+}
+
+// Histogram params based on grpc/support/histogram.c
+message HistogramParams {
+  double resolution = 1;    // first bucket is [0, 1 + resolution)
+  double max_possible = 2;  // use enough buckets to allow this value
+}
+
+// Histogram data based on grpc/support/histogram.c
+message HistogramData {
+  repeated uint32 bucket = 1;
+  double min_seen = 2;
+  double max_seen = 3;
+  double sum = 4;
+  double sum_of_squares = 5;
+  double count = 6;
+}
+
+message ClientStats {
+  // Latency histogram. Data points are in nanoseconds.
+  HistogramData latencies = 1;
+
+  // See ServerStats for details.
+  double time_elapsed = 2;
+  double time_user = 3;
+  double time_system = 4;
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/test.pb.go b/go/src/google.golang.org/grpc/benchmark/grpc_testing/test.pb.go
deleted file mode 100644
index c080709..0000000
--- a/go/src/google.golang.org/grpc/benchmark/grpc_testing/test.pb.go
+++ /dev/null
@@ -1,941 +0,0 @@
-// Code generated by protoc-gen-go.
-// source: test.proto
-// DO NOT EDIT!
-
-/*
-Package grpc_testing is a generated protocol buffer package.
-
-It is generated from these files:
-	test.proto
-
-It has these top-level messages:
-	StatsRequest
-	ServerStats
-	Payload
-	HistogramData
-	ClientConfig
-	Mark
-	ClientArgs
-	ClientStats
-	ClientStatus
-	ServerConfig
-	ServerArgs
-	ServerStatus
-	SimpleRequest
-	SimpleResponse
-*/
-package grpc_testing
-
-import proto "github.com/golang/protobuf/proto"
-import fmt "fmt"
-import math "math"
-
-import (
-	context "golang.org/x/net/context"
-	grpc "google.golang.org/grpc"
-)
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ = proto.Marshal
-var _ = fmt.Errorf
-var _ = math.Inf
-
-// This is a compile-time assertion to ensure that this generated file
-// is compatible with the proto package it is being compiled against.
-const _ = proto.ProtoPackageIsVersion1
-
-type PayloadType int32
-
-const (
-	// Compressable text format.
-	PayloadType_COMPRESSABLE PayloadType = 0
-	// Uncompressable binary format.
-	PayloadType_UNCOMPRESSABLE PayloadType = 1
-	// Randomly chosen from all other formats defined in this enum.
-	PayloadType_RANDOM PayloadType = 2
-)
-
-var PayloadType_name = map[int32]string{
-	0: "COMPRESSABLE",
-	1: "UNCOMPRESSABLE",
-	2: "RANDOM",
-}
-var PayloadType_value = map[string]int32{
-	"COMPRESSABLE":   0,
-	"UNCOMPRESSABLE": 1,
-	"RANDOM":         2,
-}
-
-func (x PayloadType) String() string {
-	return proto.EnumName(PayloadType_name, int32(x))
-}
-func (PayloadType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
-
-type ClientType int32
-
-const (
-	ClientType_SYNCHRONOUS_CLIENT ClientType = 0
-	ClientType_ASYNC_CLIENT       ClientType = 1
-)
-
-var ClientType_name = map[int32]string{
-	0: "SYNCHRONOUS_CLIENT",
-	1: "ASYNC_CLIENT",
-}
-var ClientType_value = map[string]int32{
-	"SYNCHRONOUS_CLIENT": 0,
-	"ASYNC_CLIENT":       1,
-}
-
-func (x ClientType) String() string {
-	return proto.EnumName(ClientType_name, int32(x))
-}
-func (ClientType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
-
-type ServerType int32
-
-const (
-	ServerType_SYNCHRONOUS_SERVER ServerType = 0
-	ServerType_ASYNC_SERVER       ServerType = 1
-)
-
-var ServerType_name = map[int32]string{
-	0: "SYNCHRONOUS_SERVER",
-	1: "ASYNC_SERVER",
-}
-var ServerType_value = map[string]int32{
-	"SYNCHRONOUS_SERVER": 0,
-	"ASYNC_SERVER":       1,
-}
-
-func (x ServerType) String() string {
-	return proto.EnumName(ServerType_name, int32(x))
-}
-func (ServerType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
-
-type RpcType int32
-
-const (
-	RpcType_UNARY     RpcType = 0
-	RpcType_STREAMING RpcType = 1
-)
-
-var RpcType_name = map[int32]string{
-	0: "UNARY",
-	1: "STREAMING",
-}
-var RpcType_value = map[string]int32{
-	"UNARY":     0,
-	"STREAMING": 1,
-}
-
-func (x RpcType) String() string {
-	return proto.EnumName(RpcType_name, int32(x))
-}
-func (RpcType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
-
-type StatsRequest struct {
-	// run number
-	TestNum int32 `protobuf:"varint,1,opt,name=test_num" json:"test_num,omitempty"`
-}
-
-func (m *StatsRequest) Reset()                    { *m = StatsRequest{} }
-func (m *StatsRequest) String() string            { return proto.CompactTextString(m) }
-func (*StatsRequest) ProtoMessage()               {}
-func (*StatsRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
-
-type ServerStats struct {
-	// wall clock time
-	TimeElapsed float64 `protobuf:"fixed64,1,opt,name=time_elapsed" json:"time_elapsed,omitempty"`
-	// user time used by the server process and threads
-	TimeUser float64 `protobuf:"fixed64,2,opt,name=time_user" json:"time_user,omitempty"`
-	// server time used by the server process and all threads
-	TimeSystem float64 `protobuf:"fixed64,3,opt,name=time_system" json:"time_system,omitempty"`
-}
-
-func (m *ServerStats) Reset()                    { *m = ServerStats{} }
-func (m *ServerStats) String() string            { return proto.CompactTextString(m) }
-func (*ServerStats) ProtoMessage()               {}
-func (*ServerStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
-
-type Payload struct {
-	// The type of data in body.
-	Type PayloadType `protobuf:"varint,1,opt,name=type,enum=grpc.testing.PayloadType" json:"type,omitempty"`
-	// Primary contents of payload.
-	Body []byte `protobuf:"bytes,2,opt,name=body,proto3" json:"body,omitempty"`
-}
-
-func (m *Payload) Reset()                    { *m = Payload{} }
-func (m *Payload) String() string            { return proto.CompactTextString(m) }
-func (*Payload) ProtoMessage()               {}
-func (*Payload) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
-
-type HistogramData struct {
-	Bucket       []uint32 `protobuf:"varint,1,rep,name=bucket" json:"bucket,omitempty"`
-	MinSeen      float64  `protobuf:"fixed64,2,opt,name=min_seen" json:"min_seen,omitempty"`
-	MaxSeen      float64  `protobuf:"fixed64,3,opt,name=max_seen" json:"max_seen,omitempty"`
-	Sum          float64  `protobuf:"fixed64,4,opt,name=sum" json:"sum,omitempty"`
-	SumOfSquares float64  `protobuf:"fixed64,5,opt,name=sum_of_squares" json:"sum_of_squares,omitempty"`
-	Count        float64  `protobuf:"fixed64,6,opt,name=count" json:"count,omitempty"`
-}
-
-func (m *HistogramData) Reset()                    { *m = HistogramData{} }
-func (m *HistogramData) String() string            { return proto.CompactTextString(m) }
-func (*HistogramData) ProtoMessage()               {}
-func (*HistogramData) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
-
-type ClientConfig struct {
-	ServerTargets             []string   `protobuf:"bytes,1,rep,name=server_targets" json:"server_targets,omitempty"`
-	ClientType                ClientType `protobuf:"varint,2,opt,name=client_type,enum=grpc.testing.ClientType" json:"client_type,omitempty"`
-	EnableSsl                 bool       `protobuf:"varint,3,opt,name=enable_ssl" json:"enable_ssl,omitempty"`
-	OutstandingRpcsPerChannel int32      `protobuf:"varint,4,opt,name=outstanding_rpcs_per_channel" json:"outstanding_rpcs_per_channel,omitempty"`
-	ClientChannels            int32      `protobuf:"varint,5,opt,name=client_channels" json:"client_channels,omitempty"`
-	PayloadSize               int32      `protobuf:"varint,6,opt,name=payload_size" json:"payload_size,omitempty"`
-	// only for async client:
-	AsyncClientThreads int32   `protobuf:"varint,7,opt,name=async_client_threads" json:"async_client_threads,omitempty"`
-	RpcType            RpcType `protobuf:"varint,8,opt,name=rpc_type,enum=grpc.testing.RpcType" json:"rpc_type,omitempty"`
-}
-
-func (m *ClientConfig) Reset()                    { *m = ClientConfig{} }
-func (m *ClientConfig) String() string            { return proto.CompactTextString(m) }
-func (*ClientConfig) ProtoMessage()               {}
-func (*ClientConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
-
-// Request current stats
-type Mark struct {
-}
-
-func (m *Mark) Reset()                    { *m = Mark{} }
-func (m *Mark) String() string            { return proto.CompactTextString(m) }
-func (*Mark) ProtoMessage()               {}
-func (*Mark) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
-
-type ClientArgs struct {
-	// Types that are valid to be assigned to Argtype:
-	//	*ClientArgs_Setup
-	//	*ClientArgs_Mark
-	Argtype isClientArgs_Argtype `protobuf_oneof:"argtype"`
-}
-
-func (m *ClientArgs) Reset()                    { *m = ClientArgs{} }
-func (m *ClientArgs) String() string            { return proto.CompactTextString(m) }
-func (*ClientArgs) ProtoMessage()               {}
-func (*ClientArgs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
-
-type isClientArgs_Argtype interface {
-	isClientArgs_Argtype()
-}
-
-type ClientArgs_Setup struct {
-	Setup *ClientConfig `protobuf:"bytes,1,opt,name=setup,oneof"`
-}
-type ClientArgs_Mark struct {
-	Mark *Mark `protobuf:"bytes,2,opt,name=mark,oneof"`
-}
-
-func (*ClientArgs_Setup) isClientArgs_Argtype() {}
-func (*ClientArgs_Mark) isClientArgs_Argtype()  {}
-
-func (m *ClientArgs) GetArgtype() isClientArgs_Argtype {
-	if m != nil {
-		return m.Argtype
-	}
-	return nil
-}
-
-func (m *ClientArgs) GetSetup() *ClientConfig {
-	if x, ok := m.GetArgtype().(*ClientArgs_Setup); ok {
-		return x.Setup
-	}
-	return nil
-}
-
-func (m *ClientArgs) GetMark() *Mark {
-	if x, ok := m.GetArgtype().(*ClientArgs_Mark); ok {
-		return x.Mark
-	}
-	return nil
-}
-
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*ClientArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
-	return _ClientArgs_OneofMarshaler, _ClientArgs_OneofUnmarshaler, _ClientArgs_OneofSizer, []interface{}{
-		(*ClientArgs_Setup)(nil),
-		(*ClientArgs_Mark)(nil),
-	}
-}
-
-func _ClientArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
-	m := msg.(*ClientArgs)
-	// argtype
-	switch x := m.Argtype.(type) {
-	case *ClientArgs_Setup:
-		b.EncodeVarint(1<<3 | proto.WireBytes)
-		if err := b.EncodeMessage(x.Setup); err != nil {
-			return err
-		}
-	case *ClientArgs_Mark:
-		b.EncodeVarint(2<<3 | proto.WireBytes)
-		if err := b.EncodeMessage(x.Mark); err != nil {
-			return err
-		}
-	case nil:
-	default:
-		return fmt.Errorf("ClientArgs.Argtype has unexpected type %T", x)
-	}
-	return nil
-}
-
-func _ClientArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
-	m := msg.(*ClientArgs)
-	switch tag {
-	case 1: // argtype.setup
-		if wire != proto.WireBytes {
-			return true, proto.ErrInternalBadWireType
-		}
-		msg := new(ClientConfig)
-		err := b.DecodeMessage(msg)
-		m.Argtype = &ClientArgs_Setup{msg}
-		return true, err
-	case 2: // argtype.mark
-		if wire != proto.WireBytes {
-			return true, proto.ErrInternalBadWireType
-		}
-		msg := new(Mark)
-		err := b.DecodeMessage(msg)
-		m.Argtype = &ClientArgs_Mark{msg}
-		return true, err
-	default:
-		return false, nil
-	}
-}
-
-func _ClientArgs_OneofSizer(msg proto.Message) (n int) {
-	m := msg.(*ClientArgs)
-	// argtype
-	switch x := m.Argtype.(type) {
-	case *ClientArgs_Setup:
-		s := proto.Size(x.Setup)
-		n += proto.SizeVarint(1<<3 | proto.WireBytes)
-		n += proto.SizeVarint(uint64(s))
-		n += s
-	case *ClientArgs_Mark:
-		s := proto.Size(x.Mark)
-		n += proto.SizeVarint(2<<3 | proto.WireBytes)
-		n += proto.SizeVarint(uint64(s))
-		n += s
-	case nil:
-	default:
-		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
-	}
-	return n
-}
-
-type ClientStats struct {
-	Latencies   *HistogramData `protobuf:"bytes,1,opt,name=latencies" json:"latencies,omitempty"`
-	TimeElapsed float64        `protobuf:"fixed64,3,opt,name=time_elapsed" json:"time_elapsed,omitempty"`
-	TimeUser    float64        `protobuf:"fixed64,4,opt,name=time_user" json:"time_user,omitempty"`
-	TimeSystem  float64        `protobuf:"fixed64,5,opt,name=time_system" json:"time_system,omitempty"`
-}
-
-func (m *ClientStats) Reset()                    { *m = ClientStats{} }
-func (m *ClientStats) String() string            { return proto.CompactTextString(m) }
-func (*ClientStats) ProtoMessage()               {}
-func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
-
-func (m *ClientStats) GetLatencies() *HistogramData {
-	if m != nil {
-		return m.Latencies
-	}
-	return nil
-}
-
-type ClientStatus struct {
-	Stats *ClientStats `protobuf:"bytes,1,opt,name=stats" json:"stats,omitempty"`
-}
-
-func (m *ClientStatus) Reset()                    { *m = ClientStatus{} }
-func (m *ClientStatus) String() string            { return proto.CompactTextString(m) }
-func (*ClientStatus) ProtoMessage()               {}
-func (*ClientStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
-
-func (m *ClientStatus) GetStats() *ClientStats {
-	if m != nil {
-		return m.Stats
-	}
-	return nil
-}
-
-type ServerConfig struct {
-	ServerType ServerType `protobuf:"varint,1,opt,name=server_type,enum=grpc.testing.ServerType" json:"server_type,omitempty"`
-	Threads    int32      `protobuf:"varint,2,opt,name=threads" json:"threads,omitempty"`
-	EnableSsl  bool       `protobuf:"varint,3,opt,name=enable_ssl" json:"enable_ssl,omitempty"`
-}
-
-func (m *ServerConfig) Reset()                    { *m = ServerConfig{} }
-func (m *ServerConfig) String() string            { return proto.CompactTextString(m) }
-func (*ServerConfig) ProtoMessage()               {}
-func (*ServerConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
-
-type ServerArgs struct {
-	// Types that are valid to be assigned to Argtype:
-	//	*ServerArgs_Setup
-	//	*ServerArgs_Mark
-	Argtype isServerArgs_Argtype `protobuf_oneof:"argtype"`
-}
-
-func (m *ServerArgs) Reset()                    { *m = ServerArgs{} }
-func (m *ServerArgs) String() string            { return proto.CompactTextString(m) }
-func (*ServerArgs) ProtoMessage()               {}
-func (*ServerArgs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
-
-type isServerArgs_Argtype interface {
-	isServerArgs_Argtype()
-}
-
-type ServerArgs_Setup struct {
-	Setup *ServerConfig `protobuf:"bytes,1,opt,name=setup,oneof"`
-}
-type ServerArgs_Mark struct {
-	Mark *Mark `protobuf:"bytes,2,opt,name=mark,oneof"`
-}
-
-func (*ServerArgs_Setup) isServerArgs_Argtype() {}
-func (*ServerArgs_Mark) isServerArgs_Argtype()  {}
-
-func (m *ServerArgs) GetArgtype() isServerArgs_Argtype {
-	if m != nil {
-		return m.Argtype
-	}
-	return nil
-}
-
-func (m *ServerArgs) GetSetup() *ServerConfig {
-	if x, ok := m.GetArgtype().(*ServerArgs_Setup); ok {
-		return x.Setup
-	}
-	return nil
-}
-
-func (m *ServerArgs) GetMark() *Mark {
-	if x, ok := m.GetArgtype().(*ServerArgs_Mark); ok {
-		return x.Mark
-	}
-	return nil
-}
-
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*ServerArgs) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
-	return _ServerArgs_OneofMarshaler, _ServerArgs_OneofUnmarshaler, _ServerArgs_OneofSizer, []interface{}{
-		(*ServerArgs_Setup)(nil),
-		(*ServerArgs_Mark)(nil),
-	}
-}
-
-func _ServerArgs_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
-	m := msg.(*ServerArgs)
-	// argtype
-	switch x := m.Argtype.(type) {
-	case *ServerArgs_Setup:
-		b.EncodeVarint(1<<3 | proto.WireBytes)
-		if err := b.EncodeMessage(x.Setup); err != nil {
-			return err
-		}
-	case *ServerArgs_Mark:
-		b.EncodeVarint(2<<3 | proto.WireBytes)
-		if err := b.EncodeMessage(x.Mark); err != nil {
-			return err
-		}
-	case nil:
-	default:
-		return fmt.Errorf("ServerArgs.Argtype has unexpected type %T", x)
-	}
-	return nil
-}
-
-func _ServerArgs_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
-	m := msg.(*ServerArgs)
-	switch tag {
-	case 1: // argtype.setup
-		if wire != proto.WireBytes {
-			return true, proto.ErrInternalBadWireType
-		}
-		msg := new(ServerConfig)
-		err := b.DecodeMessage(msg)
-		m.Argtype = &ServerArgs_Setup{msg}
-		return true, err
-	case 2: // argtype.mark
-		if wire != proto.WireBytes {
-			return true, proto.ErrInternalBadWireType
-		}
-		msg := new(Mark)
-		err := b.DecodeMessage(msg)
-		m.Argtype = &ServerArgs_Mark{msg}
-		return true, err
-	default:
-		return false, nil
-	}
-}
-
-func _ServerArgs_OneofSizer(msg proto.Message) (n int) {
-	m := msg.(*ServerArgs)
-	// argtype
-	switch x := m.Argtype.(type) {
-	case *ServerArgs_Setup:
-		s := proto.Size(x.Setup)
-		n += proto.SizeVarint(1<<3 | proto.WireBytes)
-		n += proto.SizeVarint(uint64(s))
-		n += s
-	case *ServerArgs_Mark:
-		s := proto.Size(x.Mark)
-		n += proto.SizeVarint(2<<3 | proto.WireBytes)
-		n += proto.SizeVarint(uint64(s))
-		n += s
-	case nil:
-	default:
-		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
-	}
-	return n
-}
-
-type ServerStatus struct {
-	Stats *ServerStats `protobuf:"bytes,1,opt,name=stats" json:"stats,omitempty"`
-	Port  int32        `protobuf:"varint,2,opt,name=port" json:"port,omitempty"`
-}
-
-func (m *ServerStatus) Reset()                    { *m = ServerStatus{} }
-func (m *ServerStatus) String() string            { return proto.CompactTextString(m) }
-func (*ServerStatus) ProtoMessage()               {}
-func (*ServerStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
-
-func (m *ServerStatus) GetStats() *ServerStats {
-	if m != nil {
-		return m.Stats
-	}
-	return nil
-}
-
-type SimpleRequest struct {
-	// Desired payload type in the response from the server.
-	// If response_type is RANDOM, server randomly chooses one from other formats.
-	ResponseType PayloadType `protobuf:"varint,1,opt,name=response_type,enum=grpc.testing.PayloadType" json:"response_type,omitempty"`
-	// Desired payload size in the response from the server.
-	// If response_type is COMPRESSABLE, this denotes the size before compression.
-	ResponseSize int32 `protobuf:"varint,2,opt,name=response_size" json:"response_size,omitempty"`
-	// Optional input payload sent along with the request.
-	Payload *Payload `protobuf:"bytes,3,opt,name=payload" json:"payload,omitempty"`
-}
-
-func (m *SimpleRequest) Reset()                    { *m = SimpleRequest{} }
-func (m *SimpleRequest) String() string            { return proto.CompactTextString(m) }
-func (*SimpleRequest) ProtoMessage()               {}
-func (*SimpleRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{12} }
-
-func (m *SimpleRequest) GetPayload() *Payload {
-	if m != nil {
-		return m.Payload
-	}
-	return nil
-}
-
-type SimpleResponse struct {
-	Payload *Payload `protobuf:"bytes,1,opt,name=payload" json:"payload,omitempty"`
-}
-
-func (m *SimpleResponse) Reset()                    { *m = SimpleResponse{} }
-func (m *SimpleResponse) String() string            { return proto.CompactTextString(m) }
-func (*SimpleResponse) ProtoMessage()               {}
-func (*SimpleResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{13} }
-
-func (m *SimpleResponse) GetPayload() *Payload {
-	if m != nil {
-		return m.Payload
-	}
-	return nil
-}
-
-func init() {
-	proto.RegisterType((*StatsRequest)(nil), "grpc.testing.StatsRequest")
-	proto.RegisterType((*ServerStats)(nil), "grpc.testing.ServerStats")
-	proto.RegisterType((*Payload)(nil), "grpc.testing.Payload")
-	proto.RegisterType((*HistogramData)(nil), "grpc.testing.HistogramData")
-	proto.RegisterType((*ClientConfig)(nil), "grpc.testing.ClientConfig")
-	proto.RegisterType((*Mark)(nil), "grpc.testing.Mark")
-	proto.RegisterType((*ClientArgs)(nil), "grpc.testing.ClientArgs")
-	proto.RegisterType((*ClientStats)(nil), "grpc.testing.ClientStats")
-	proto.RegisterType((*ClientStatus)(nil), "grpc.testing.ClientStatus")
-	proto.RegisterType((*ServerConfig)(nil), "grpc.testing.ServerConfig")
-	proto.RegisterType((*ServerArgs)(nil), "grpc.testing.ServerArgs")
-	proto.RegisterType((*ServerStatus)(nil), "grpc.testing.ServerStatus")
-	proto.RegisterType((*SimpleRequest)(nil), "grpc.testing.SimpleRequest")
-	proto.RegisterType((*SimpleResponse)(nil), "grpc.testing.SimpleResponse")
-	proto.RegisterEnum("grpc.testing.PayloadType", PayloadType_name, PayloadType_value)
-	proto.RegisterEnum("grpc.testing.ClientType", ClientType_name, ClientType_value)
-	proto.RegisterEnum("grpc.testing.ServerType", ServerType_name, ServerType_value)
-	proto.RegisterEnum("grpc.testing.RpcType", RpcType_name, RpcType_value)
-}
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ context.Context
-var _ grpc.ClientConn
-
-// Client API for TestService service
-
-type TestServiceClient interface {
-	// One request followed by one response.
-	// The server returns the client payload as-is.
-	UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error)
-	// One request followed by one response.
-	// The server returns the client payload as-is.
-	StreamingCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingCallClient, error)
-}
-
-type testServiceClient struct {
-	cc *grpc.ClientConn
-}
-
-func NewTestServiceClient(cc *grpc.ClientConn) TestServiceClient {
-	return &testServiceClient{cc}
-}
-
-func (c *testServiceClient) UnaryCall(ctx context.Context, in *SimpleRequest, opts ...grpc.CallOption) (*SimpleResponse, error) {
-	out := new(SimpleResponse)
-	err := grpc.Invoke(ctx, "/grpc.testing.TestService/UnaryCall", in, out, c.cc, opts...)
-	if err != nil {
-		return nil, err
-	}
-	return out, nil
-}
-
-func (c *testServiceClient) StreamingCall(ctx context.Context, opts ...grpc.CallOption) (TestService_StreamingCallClient, error) {
-	stream, err := grpc.NewClientStream(ctx, &_TestService_serviceDesc.Streams[0], c.cc, "/grpc.testing.TestService/StreamingCall", opts...)
-	if err != nil {
-		return nil, err
-	}
-	x := &testServiceStreamingCallClient{stream}
-	return x, nil
-}
-
-type TestService_StreamingCallClient interface {
-	Send(*SimpleRequest) error
-	Recv() (*SimpleResponse, error)
-	grpc.ClientStream
-}
-
-type testServiceStreamingCallClient struct {
-	grpc.ClientStream
-}
-
-func (x *testServiceStreamingCallClient) Send(m *SimpleRequest) error {
-	return x.ClientStream.SendMsg(m)
-}
-
-func (x *testServiceStreamingCallClient) Recv() (*SimpleResponse, error) {
-	m := new(SimpleResponse)
-	if err := x.ClientStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
-
-// Server API for TestService service
-
-type TestServiceServer interface {
-	// One request followed by one response.
-	// The server returns the client payload as-is.
-	UnaryCall(context.Context, *SimpleRequest) (*SimpleResponse, error)
-	// One request followed by one response.
-	// The server returns the client payload as-is.
-	StreamingCall(TestService_StreamingCallServer) error
-}
-
-func RegisterTestServiceServer(s *grpc.Server, srv TestServiceServer) {
-	s.RegisterService(&_TestService_serviceDesc, srv)
-}
-
-func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
-	in := new(SimpleRequest)
-	if err := dec(in); err != nil {
-		return nil, err
-	}
-	out, err := srv.(TestServiceServer).UnaryCall(ctx, in)
-	if err != nil {
-		return nil, err
-	}
-	return out, nil
-}
-
-func _TestService_StreamingCall_Handler(srv interface{}, stream grpc.ServerStream) error {
-	return srv.(TestServiceServer).StreamingCall(&testServiceStreamingCallServer{stream})
-}
-
-type TestService_StreamingCallServer interface {
-	Send(*SimpleResponse) error
-	Recv() (*SimpleRequest, error)
-	grpc.ServerStream
-}
-
-type testServiceStreamingCallServer struct {
-	grpc.ServerStream
-}
-
-func (x *testServiceStreamingCallServer) Send(m *SimpleResponse) error {
-	return x.ServerStream.SendMsg(m)
-}
-
-func (x *testServiceStreamingCallServer) Recv() (*SimpleRequest, error) {
-	m := new(SimpleRequest)
-	if err := x.ServerStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
-
-var _TestService_serviceDesc = grpc.ServiceDesc{
-	ServiceName: "grpc.testing.TestService",
-	HandlerType: (*TestServiceServer)(nil),
-	Methods: []grpc.MethodDesc{
-		{
-			MethodName: "UnaryCall",
-			Handler:    _TestService_UnaryCall_Handler,
-		},
-	},
-	Streams: []grpc.StreamDesc{
-		{
-			StreamName:    "StreamingCall",
-			Handler:       _TestService_StreamingCall_Handler,
-			ServerStreams: true,
-			ClientStreams: true,
-		},
-	},
-}
-
-// Client API for Worker service
-
-type WorkerClient interface {
-	// Start test with specified workload
-	RunTest(ctx context.Context, opts ...grpc.CallOption) (Worker_RunTestClient, error)
-	// Start test with specified workload
-	RunServer(ctx context.Context, opts ...grpc.CallOption) (Worker_RunServerClient, error)
-}
-
-type workerClient struct {
-	cc *grpc.ClientConn
-}
-
-func NewWorkerClient(cc *grpc.ClientConn) WorkerClient {
-	return &workerClient{cc}
-}
-
-func (c *workerClient) RunTest(ctx context.Context, opts ...grpc.CallOption) (Worker_RunTestClient, error) {
-	stream, err := grpc.NewClientStream(ctx, &_Worker_serviceDesc.Streams[0], c.cc, "/grpc.testing.Worker/RunTest", opts...)
-	if err != nil {
-		return nil, err
-	}
-	x := &workerRunTestClient{stream}
-	return x, nil
-}
-
-type Worker_RunTestClient interface {
-	Send(*ClientArgs) error
-	Recv() (*ClientStatus, error)
-	grpc.ClientStream
-}
-
-type workerRunTestClient struct {
-	grpc.ClientStream
-}
-
-func (x *workerRunTestClient) Send(m *ClientArgs) error {
-	return x.ClientStream.SendMsg(m)
-}
-
-func (x *workerRunTestClient) Recv() (*ClientStatus, error) {
-	m := new(ClientStatus)
-	if err := x.ClientStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
-
-func (c *workerClient) RunServer(ctx context.Context, opts ...grpc.CallOption) (Worker_RunServerClient, error) {
-	stream, err := grpc.NewClientStream(ctx, &_Worker_serviceDesc.Streams[1], c.cc, "/grpc.testing.Worker/RunServer", opts...)
-	if err != nil {
-		return nil, err
-	}
-	x := &workerRunServerClient{stream}
-	return x, nil
-}
-
-type Worker_RunServerClient interface {
-	Send(*ServerArgs) error
-	Recv() (*ServerStatus, error)
-	grpc.ClientStream
-}
-
-type workerRunServerClient struct {
-	grpc.ClientStream
-}
-
-func (x *workerRunServerClient) Send(m *ServerArgs) error {
-	return x.ClientStream.SendMsg(m)
-}
-
-func (x *workerRunServerClient) Recv() (*ServerStatus, error) {
-	m := new(ServerStatus)
-	if err := x.ClientStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
-
-// Server API for Worker service
-
-type WorkerServer interface {
-	// Start test with specified workload
-	RunTest(Worker_RunTestServer) error
-	// Start test with specified workload
-	RunServer(Worker_RunServerServer) error
-}
-
-func RegisterWorkerServer(s *grpc.Server, srv WorkerServer) {
-	s.RegisterService(&_Worker_serviceDesc, srv)
-}
-
-func _Worker_RunTest_Handler(srv interface{}, stream grpc.ServerStream) error {
-	return srv.(WorkerServer).RunTest(&workerRunTestServer{stream})
-}
-
-type Worker_RunTestServer interface {
-	Send(*ClientStatus) error
-	Recv() (*ClientArgs, error)
-	grpc.ServerStream
-}
-
-type workerRunTestServer struct {
-	grpc.ServerStream
-}
-
-func (x *workerRunTestServer) Send(m *ClientStatus) error {
-	return x.ServerStream.SendMsg(m)
-}
-
-func (x *workerRunTestServer) Recv() (*ClientArgs, error) {
-	m := new(ClientArgs)
-	if err := x.ServerStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
-
-func _Worker_RunServer_Handler(srv interface{}, stream grpc.ServerStream) error {
-	return srv.(WorkerServer).RunServer(&workerRunServerServer{stream})
-}
-
-type Worker_RunServerServer interface {
-	Send(*ServerStatus) error
-	Recv() (*ServerArgs, error)
-	grpc.ServerStream
-}
-
-type workerRunServerServer struct {
-	grpc.ServerStream
-}
-
-func (x *workerRunServerServer) Send(m *ServerStatus) error {
-	return x.ServerStream.SendMsg(m)
-}
-
-func (x *workerRunServerServer) Recv() (*ServerArgs, error) {
-	m := new(ServerArgs)
-	if err := x.ServerStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
-
-var _Worker_serviceDesc = grpc.ServiceDesc{
-	ServiceName: "grpc.testing.Worker",
-	HandlerType: (*WorkerServer)(nil),
-	Methods:     []grpc.MethodDesc{},
-	Streams: []grpc.StreamDesc{
-		{
-			StreamName:    "RunTest",
-			Handler:       _Worker_RunTest_Handler,
-			ServerStreams: true,
-			ClientStreams: true,
-		},
-		{
-			StreamName:    "RunServer",
-			Handler:       _Worker_RunServer_Handler,
-			ServerStreams: true,
-			ClientStreams: true,
-		},
-	},
-}
-
-var fileDescriptor0 = []byte{
-	// 988 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xa4, 0x56, 0x5f, 0x6f, 0x1b, 0x45,
-	0x10, 0xef, 0xc5, 0xff, 0xe2, 0x39, 0x27, 0x44, 0xab, 0x52, 0x39, 0x69, 0x11, 0x70, 0x05, 0x11,
-	0x22, 0x91, 0x56, 0x46, 0x42, 0xea, 0x0b, 0x91, 0xeb, 0x1a, 0x52, 0x29, 0x71, 0xa2, 0xbd, 0x04,
-	0xd4, 0xa7, 0xd3, 0xc6, 0xde, 0xb8, 0xa7, 0x9c, 0xef, 0xae, 0xb7, 0x7b, 0xa8, 0xe6, 0x09, 0xf1,
-	0x19, 0xf8, 0x0a, 0x3c, 0x20, 0xbe, 0x24, 0xb3, 0xb3, 0x7b, 0x89, 0x9d, 0x9a, 0x36, 0x52, 0x9f,
-	0x72, 0x3b, 0xf3, 0x9b, 0xdf, 0xce, 0xfe, 0xe6, 0x8f, 0x03, 0xa0, 0xa5, 0xd2, 0xfb, 0x79, 0x91,
-	0xe9, 0x8c, 0x75, 0xa6, 0x45, 0x3e, 0xde, 0x37, 0x86, 0x38, 0x9d, 0x06, 0xdf, 0x42, 0x27, 0xd4,
-	0x42, 0x2b, 0x2e, 0xdf, 0x94, 0x68, 0x62, 0xdb, 0xb0, 0x6e, 0x5c, 0x51, 0x5a, 0xce, 0xba, 0xde,
-	0x17, 0xde, 0x6e, 0x83, 0xb7, 0xcc, 0x79, 0x54, 0xce, 0x82, 0x14, 0xfc, 0x50, 0x16, 0xbf, 0xc9,
-	0x82, 0x02, 0xd8, 0x97, 0xd0, 0xd1, 0xf1, 0x4c, 0x46, 0x32, 0x11, 0xb9, 0x92, 0x13, 0x42, 0x7b,
-	0xdc, 0x37, 0xb6, 0xa1, 0x35, 0xb1, 0x87, 0xd0, 0x26, 0x48, 0xa9, 0x64, 0xd1, 0x5d, 0x23, 0xff,
-	0xba, 0x31, 0x9c, 0xe3, 0x99, 0x7d, 0x0e, 0x84, 0x8d, 0xd4, 0x5c, 0x69, 0x39, 0xeb, 0xd6, 0xc8,
-	0x0d, 0xc6, 0x14, 0x92, 0x25, 0x38, 0x82, 0xd6, 0xa9, 0x98, 0x27, 0x99, 0x98, 0xb0, 0xef, 0xa0,
-	0xae, 0xe7, 0xb9, 0xa4, 0x3b, 0x36, 0x7b, 0xdb, 0xfb, 0x8b, 0x4f, 0xd8, 0x77, 0xa0, 0x33, 0x04,
-	0x70, 0x82, 0x31, 0x06, 0xf5, 0x8b, 0x6c, 0x32, 0xa7, 0x2b, 0x3b, 0x9c, 0xbe, 0x83, 0x7f, 0x3d,
-	0xd8, 0x38, 0x8c, 0x95, 0xce, 0xa6, 0x85, 0x98, 0xbd, 0x10, 0x5a, 0xb0, 0x07, 0xd0, 0xbc, 0x28,
-	0xc7, 0x57, 0x52, 0x23, 0x6d, 0x6d, 0x77, 0x83, 0xbb, 0x93, 0x91, 0x60, 0x16, 0xa7, 0x91, 0x92,
-	0x32, 0x75, 0x49, 0xb7, 0xf0, 0x1c, 0xe2, 0x91, 0x5c, 0xe2, 0xad, 0x75, 0xd5, 0x9c, 0x4b, 0xbc,
-	0x25, 0xd7, 0x16, 0xd4, 0x14, 0x6a, 0x56, 0x27, 0xab, 0xf9, 0x64, 0x5f, 0xc1, 0x26, 0xfe, 0x89,
-	0xb2, 0xcb, 0x48, 0xbd, 0x29, 0x45, 0x21, 0x55, 0xb7, 0x41, 0xce, 0x0e, 0x5a, 0x4f, 0x2e, 0x43,
-	0x6b, 0x63, 0xf7, 0xa1, 0x31, 0xce, 0xca, 0x54, 0x77, 0x9b, 0xe4, 0xb4, 0x87, 0xe0, 0x8f, 0x1a,
-	0x74, 0x06, 0x49, 0x2c, 0x53, 0x3d, 0xc8, 0xd2, 0xcb, 0x78, 0xca, 0xbe, 0x46, 0x32, 0x12, 0x3f,
-	0xd2, 0xa2, 0x98, 0x4a, 0xad, 0x28, 0xe9, 0x36, 0xdf, 0xb0, 0xd6, 0x33, 0x6b, 0x64, 0xcf, 0xc0,
-	0x1f, 0x53, 0x58, 0x44, 0x7a, 0xad, 0x91, 0x5e, 0xdd, 0x65, 0xbd, 0x2c, 0x2f, 0xc9, 0x05, 0xe3,
-	0xeb, 0x6f, 0xf6, 0x19, 0x80, 0x4c, 0xc5, 0x45, 0x82, 0x15, 0x51, 0x09, 0xbd, 0x6e, 0x9d, 0xb7,
-	0xad, 0x25, 0x54, 0x09, 0x3b, 0x80, 0x47, 0x59, 0xa9, 0x95, 0x16, 0xe9, 0x04, 0x49, 0x22, 0x24,
-	0x54, 0x51, 0x8e, 0xe9, 0x8c, 0x5f, 0x8b, 0x34, 0x95, 0x09, 0x3d, 0xbc, 0xc1, 0xb7, 0x17, 0x30,
-	0x1c, 0x21, 0xa7, 0xb2, 0x18, 0x58, 0x00, 0xfb, 0x06, 0x3e, 0x71, 0xa9, 0xb9, 0x10, 0xab, 0x47,
-	0x83, 0x6f, 0x5a, 0xb3, 0xc3, 0x51, 0x63, 0xe5, 0xb6, 0xa4, 0x91, 0x8a, 0x7f, 0x97, 0x24, 0x4c,
-	0x83, 0xfb, 0xce, 0x16, 0xa2, 0x89, 0x3d, 0x85, 0xfb, 0x42, 0xcd, 0xd3, 0x71, 0x54, 0x3d, 0xf6,
-	0x75, 0x21, 0xc5, 0x44, 0x75, 0x5b, 0x04, 0x65, 0xe4, 0x73, 0xcf, 0xb4, 0x1e, 0x8c, 0x58, 0xc7,
-	0x94, 0xad, 0x2a, 0xeb, 0xa4, 0xca, 0xa7, 0xcb, 0xaa, 0x60, 0xb6, 0x24, 0x49, 0xab, 0xb0, 0x1f,
-	0x41, 0x13, 0xea, 0xc7, 0xa2, 0xb8, 0x0a, 0x4a, 0x00, 0x4b, 0xd5, 0x2f, 0xa6, 0x8a, 0xf5, 0xa0,
-	0xa1, 0xa4, 0x2e, 0x73, 0x6a, 0x45, 0xbf, 0xb7, 0xb3, 0x4a, 0x5a, 0x5b, 0xb2, 0xc3, 0x7b, 0xdc,
-	0x42, 0xd9, 0x2e, 0xd4, 0x67, 0xc8, 0x44, 0xd5, 0xf0, 0x7b, 0x6c, 0x39, 0xc4, 0xdc, 0x81, 0x50,
-	0x42, 0x3c, 0x6f, 0x43, 0x0b, 0x0b, 0x69, 0x92, 0x0c, 0xfe, 0xf1, 0xc0, 0xb7, 0x74, 0x76, 0xdc,
-	0x9e, 0x41, 0x3b, 0x11, 0x5a, 0xa6, 0xe3, 0x58, 0x2a, 0x77, 0xf9, 0xc3, 0x65, 0xa6, 0xa5, 0xee,
-	0xe6, 0x37, 0xe8, 0x77, 0x26, 0xb5, 0xf6, 0x81, 0x49, 0xad, 0xbf, 0x7f, 0x52, 0x1b, 0xef, 0x4c,
-	0xea, 0x41, 0xd5, 0xac, 0x26, 0xd5, 0x52, 0xb1, 0x27, 0x28, 0x92, 0x49, 0xda, 0xe5, 0xb9, 0xbd,
-	0x4a, 0x24, 0xbb, 0x75, 0x2c, 0x2e, 0xf8, 0xd3, 0xc3, 0x35, 0x44, 0x8d, 0xec, 0xda, 0x1d, 0xfb,
-	0xb8, 0x6a, 0xf7, 0x9b, 0xb9, 0xbf, 0xd5, 0xc7, 0x36, 0xc0, 0xf6, 0xb1, 0xba, 0xfe, 0x66, 0x5d,
-	0x68, 0x55, 0xed, 0xb0, 0xe6, 0x16, 0x98, 0xeb, 0x81, 0xf7, 0x77, 0xb8, 0x29, 0xb4, 0xa5, 0xbc,
-	0x43, 0xa1, 0x17, 0x93, 0xfd, 0xc8, 0x42, 0x87, 0xd5, 0xd3, 0xef, 0x24, 0xde, 0xc2, 0x06, 0x76,
-	0xe2, 0x99, 0x6d, 0x97, 0x67, 0x85, 0x76, 0xaf, 0xa5, 0xef, 0xe0, 0x6f, 0xdc, 0x76, 0x61, 0x3c,
-	0xcb, 0x13, 0x59, 0x2d, 0xf6, 0x1f, 0x61, 0x03, 0xd7, 0x4d, 0x9e, 0xa5, 0x4a, 0x46, 0x77, 0xdb,
-	0xa5, 0x9d, 0x0a, 0x4f, 0xb2, 0x3e, 0x5e, 0x88, 0xa7, 0xb1, 0xb4, 0xd7, 0x5d, 0x83, 0x68, 0x2e,
-	0x9f, 0x40, 0xcb, 0x8d, 0x29, 0xc9, 0xeb, 0xdf, 0x1e, 0x32, 0x47, 0xcf, 0x2b, 0x54, 0xd0, 0x87,
-	0xcd, 0x2a, 0x4d, 0x4b, 0xb3, 0x48, 0xe1, 0xdd, 0x85, 0x62, 0xef, 0x00, 0xfc, 0x85, 0xac, 0x71,
-	0x0f, 0x77, 0x06, 0x27, 0xc7, 0xa7, 0x7c, 0x18, 0x86, 0xfd, 0xe7, 0x47, 0xc3, 0xad, 0x7b, 0xa8,
-	0xcf, 0xe6, 0xf9, 0x68, 0xc9, 0xe6, 0x31, 0x80, 0x26, 0xef, 0x8f, 0x5e, 0x9c, 0x1c, 0x6f, 0xad,
-	0xed, 0xfd, 0x50, 0x0d, 0x38, 0xc5, 0x3f, 0x00, 0x16, 0xbe, 0x1a, 0x0d, 0x0e, 0xf9, 0xc9, 0xe8,
-	0xe4, 0x3c, 0x8c, 0x06, 0x47, 0x2f, 0x87, 0xa3, 0x33, 0x64, 0x41, 0xde, 0xbe, 0x71, 0x54, 0x16,
-	0xcf, 0xc4, 0xdd, 0xb4, 0xe0, 0xed, 0xb8, 0x70, 0xc8, 0x7f, 0x19, 0xf2, 0xc5, 0x38, 0x67, 0xf1,
-	0xf6, 0x1e, 0x43, 0xcb, 0x2d, 0x1b, 0xd6, 0x86, 0xc6, 0xf9, 0xa8, 0xcf, 0x5f, 0x21, 0x6e, 0x03,
-	0xda, 0xe1, 0x19, 0x1f, 0xf6, 0x8f, 0x5f, 0x8e, 0x7e, 0xde, 0xf2, 0x7a, 0x58, 0x40, 0xff, 0x0c,
-	0x9f, 0x6c, 0x6e, 0x88, 0xc7, 0x92, 0xfd, 0x04, 0xed, 0xf3, 0x54, 0x14, 0xf3, 0x81, 0x48, 0x12,
-	0x76, 0x6b, 0xf0, 0x97, 0x0a, 0xbd, 0xf3, 0x68, 0xb5, 0xd3, 0xc9, 0x3b, 0xc2, 0xbe, 0xd0, 0x38,
-	0x0e, 0xf8, 0x8b, 0x36, 0xfd, 0x48, 0xae, 0x5d, 0xef, 0xa9, 0xd7, 0xfb, 0xcb, 0x83, 0xe6, 0xaf,
-	0x59, 0x71, 0x85, 0x6b, 0x62, 0x80, 0xef, 0x2a, 0x53, 0x93, 0x34, 0x5b, 0xf9, 0x8b, 0x63, 0xc6,
-	0x6a, 0x67, 0xe7, 0xff, 0x76, 0x41, 0xa9, 0x0c, 0x1f, 0x1b, 0x42, 0x1b, 0x49, 0xac, 0xae, 0x6c,
-	0xe5, 0xc0, 0xaf, 0xa2, 0x59, 0x1c, 0x20, 0x43, 0x73, 0xd1, 0xa4, 0xff, 0x75, 0xbe, 0xff, 0x2f,
-	0x00, 0x00, 0xff, 0xff, 0xe3, 0xb1, 0x00, 0x4d, 0xf9, 0x08, 0x00, 0x00,
-}
diff --git a/go/src/google.golang.org/grpc/benchmark/grpc_testing/test.proto b/go/src/google.golang.org/grpc/benchmark/grpc_testing/test.proto
deleted file mode 100644
index b0b2f80..0000000
--- a/go/src/google.golang.org/grpc/benchmark/grpc_testing/test.proto
+++ /dev/null
@@ -1,148 +0,0 @@
-// An integration test service that covers all the method signature permutations
-// of unary/streaming requests/responses.
-syntax = "proto3";
-
-package grpc.testing;
-
-enum PayloadType {
-    // Compressable text format.
-    COMPRESSABLE = 0;
-
-    // Uncompressable binary format.
-    UNCOMPRESSABLE = 1;
-
-    // Randomly chosen from all other formats defined in this enum.
-    RANDOM = 2;
-}
-
-message StatsRequest {
-    // run number
-    int32 test_num = 1;
-}
-
-message ServerStats {
-    // wall clock time
-    double time_elapsed = 1;
-
-    // user time used by the server process and threads
-    double time_user = 2;
-
-    // server time used by the server process and all threads
-    double time_system = 3;
-}
-
-message Payload {
-    // The type of data in body.
-    PayloadType type = 1;
-    // Primary contents of payload.
-    bytes body = 2;
-}
-
-message HistogramData {
-    repeated uint32 bucket = 1;
-    double min_seen = 2;
-    double max_seen = 3;
-    double sum = 4;
-    double sum_of_squares = 5;
-    double count = 6;
-}
-
-enum ClientType {
-    SYNCHRONOUS_CLIENT = 0;
-    ASYNC_CLIENT = 1;
-}
-
-enum ServerType {
-    SYNCHRONOUS_SERVER = 0;
-    ASYNC_SERVER = 1;
-}
-
-enum RpcType {
-    UNARY = 0;
-    STREAMING = 1;
-}
-
-message ClientConfig {
-    repeated string server_targets = 1;
-    ClientType client_type = 2;
-    bool enable_ssl = 3;
-    int32 outstanding_rpcs_per_channel = 4;
-    int32 client_channels = 5;
-    int32 payload_size = 6;
-    // only for async client:
-    int32 async_client_threads = 7;
-    RpcType rpc_type = 8;
-}
-
-// Request current stats
-message Mark {}
-
-message ClientArgs {
-    oneof argtype {
-        ClientConfig setup = 1;
-        Mark mark = 2;
-    }
-}
-
-message ClientStats {
-    HistogramData latencies = 1;
-    double time_elapsed = 3;
-    double time_user = 4;
-    double time_system = 5;
-}
-
-message ClientStatus {
-    ClientStats stats = 1;
-}
-
-message ServerConfig {
-    ServerType server_type = 1;
-    int32 threads = 2;
-    bool enable_ssl = 3;
-}
-
-message ServerArgs {
-    oneof argtype {
-        ServerConfig setup = 1;
-        Mark mark = 2;
-    }
-}
-
-message ServerStatus {
-    ServerStats stats = 1;
-    int32 port = 2;
-}
-
-message SimpleRequest {
-    // Desired payload type in the response from the server.
-    // If response_type is RANDOM, server randomly chooses one from other formats.
-    PayloadType response_type = 1;
-
-    // Desired payload size in the response from the server.
-    // If response_type is COMPRESSABLE, this denotes the size before compression.
-    int32 response_size = 2;
-
-    // Optional input payload sent along with the request.
-    Payload payload = 3;
-}
-
-message SimpleResponse {
-    Payload payload = 1;
-}
-
-service TestService {
-    // One request followed by one response.
-    // The server returns the client payload as-is.
-    rpc UnaryCall(SimpleRequest) returns (SimpleResponse);
-
-    // One request followed by one response.
-    // The server returns the client payload as-is.
-    rpc StreamingCall(stream SimpleRequest) returns (stream SimpleResponse);
-}
-
-service Worker {
-    // Start test with specified workload
-    rpc RunTest(stream ClientArgs) returns (stream ClientStatus);
-    // Start test with specified workload
-    rpc RunServer(stream ServerArgs) returns (stream ServerStatus);
-}
diff --git a/go/src/google.golang.org/grpc/benchmark/server/main.go b/go/src/google.golang.org/grpc/benchmark/server/main.go
index 090f002..d43aad0 100644
--- a/go/src/google.golang.org/grpc/benchmark/server/main.go
+++ b/go/src/google.golang.org/grpc/benchmark/server/main.go
@@ -28,7 +28,7 @@
 			grpclog.Fatalf("Failed to serve: %v", err)
 		}
 	}()
-	addr, stopper := benchmark.StartServer(":0") // listen on all interfaces
+	addr, stopper := benchmark.StartServer(benchmark.ServerInfo{Addr: ":0", Type: "protobuf"}) // listen on all interfaces
 	grpclog.Println("Server Address: ", addr)
 	<-time.After(time.Duration(*duration) * time.Second)
 	stopper()
diff --git a/go/src/google.golang.org/grpc/benchmark/server/testdata/ca.pem b/go/src/google.golang.org/grpc/benchmark/server/testdata/ca.pem
new file mode 100644
index 0000000..6c8511a
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/server/testdata/ca.pem
@@ -0,0 +1,15 @@
+-----BEGIN CERTIFICATE-----
+MIICSjCCAbOgAwIBAgIJAJHGGR4dGioHMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
+BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
+aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnRlc3RjYTAeFw0xNDExMTEyMjMxMjla
+Fw0yNDExMDgyMjMxMjlaMFYxCzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0
+YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxDzANBgNVBAMT
+BnRlc3RjYTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAwEDfBV5MYdlHVHJ7
++L4nxrZy7mBfAVXpOc5vMYztssUI7mL2/iYujiIXM+weZYNTEpLdjyJdu7R5gGUu
+g1jSVK/EPHfc74O7AyZU34PNIP4Sh33N+/A5YexrNgJlPY+E3GdVYi4ldWJjgkAd
+Qah2PH5ACLrIIC6tRka9hcaBlIECAwEAAaMgMB4wDAYDVR0TBAUwAwEB/zAOBgNV
+HQ8BAf8EBAMCAgQwDQYJKoZIhvcNAQELBQADgYEAHzC7jdYlzAVmddi/gdAeKPau
+sPBG/C2HCWqHzpCUHcKuvMzDVkY/MP2o6JIW2DBbY64bO/FceExhjcykgaYtCH/m
+oIU63+CFOTtR7otyQAWHqXa7q4SbCDlG7DyRFxqG0txPtGvy12lgldA2+RgcigQG
+Dfcog5wrJytaQ6UA0wE=
+-----END CERTIFICATE-----
diff --git a/go/src/google.golang.org/grpc/benchmark/server/testdata/server1.key b/go/src/google.golang.org/grpc/benchmark/server/testdata/server1.key
new file mode 100644
index 0000000..143a5b8
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/server/testdata/server1.key
@@ -0,0 +1,16 @@
+-----BEGIN PRIVATE KEY-----
+MIICdQIBADANBgkqhkiG9w0BAQEFAASCAl8wggJbAgEAAoGBAOHDFScoLCVJpYDD
+M4HYtIdV6Ake/sMNaaKdODjDMsux/4tDydlumN+fm+AjPEK5GHhGn1BgzkWF+slf
+3BxhrA/8dNsnunstVA7ZBgA/5qQxMfGAq4wHNVX77fBZOgp9VlSMVfyd9N8YwbBY
+AckOeUQadTi2X1S6OgJXgQ0m3MWhAgMBAAECgYAn7qGnM2vbjJNBm0VZCkOkTIWm
+V10okw7EPJrdL2mkre9NasghNXbE1y5zDshx5Nt3KsazKOxTT8d0Jwh/3KbaN+YY
+tTCbKGW0pXDRBhwUHRcuRzScjli8Rih5UOCiZkhefUTcRb6xIhZJuQy71tjaSy0p
+dHZRmYyBYO2YEQ8xoQJBAPrJPhMBkzmEYFtyIEqAxQ/o/A6E+E4w8i+KM7nQCK7q
+K4JXzyXVAjLfyBZWHGM2uro/fjqPggGD6QH1qXCkI4MCQQDmdKeb2TrKRh5BY1LR
+81aJGKcJ2XbcDu6wMZK4oqWbTX2KiYn9GB0woM6nSr/Y6iy1u145YzYxEV/iMwff
+DJULAkB8B2MnyzOg0pNFJqBJuH29bKCcHa8gHJzqXhNO5lAlEbMK95p/P2Wi+4Hd
+aiEIAF1BF326QJcvYKmwSmrORp85AkAlSNxRJ50OWrfMZnBgzVjDx3xG6KsFQVk2
+ol6VhqL6dFgKUORFUWBvnKSyhjJxurlPEahV6oo6+A+mPhFY8eUvAkAZQyTdupP3
+XEFQKctGz+9+gKkemDp7LBBMEMBXrGTLPhpEfcjv/7KPdnFHYmhYeBTBnuVmTVWe
+F98XJ7tIFfJq
+-----END PRIVATE KEY-----
diff --git a/go/src/google.golang.org/grpc/benchmark/server/testdata/server1.pem b/go/src/google.golang.org/grpc/benchmark/server/testdata/server1.pem
new file mode 100644
index 0000000..f3d43fc
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/server/testdata/server1.pem
@@ -0,0 +1,16 @@
+-----BEGIN CERTIFICATE-----
+MIICnDCCAgWgAwIBAgIBBzANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJBVTET
+MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
+dHkgTHRkMQ8wDQYDVQQDEwZ0ZXN0Y2EwHhcNMTUxMTA0MDIyMDI0WhcNMjUxMTAx
+MDIyMDI0WjBlMQswCQYDVQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNV
+BAcTB0NoaWNhZ28xFTATBgNVBAoTDEV4YW1wbGUsIENvLjEaMBgGA1UEAxQRKi50
+ZXN0Lmdvb2dsZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAOHDFSco
+LCVJpYDDM4HYtIdV6Ake/sMNaaKdODjDMsux/4tDydlumN+fm+AjPEK5GHhGn1Bg
+zkWF+slf3BxhrA/8dNsnunstVA7ZBgA/5qQxMfGAq4wHNVX77fBZOgp9VlSMVfyd
+9N8YwbBYAckOeUQadTi2X1S6OgJXgQ0m3MWhAgMBAAGjazBpMAkGA1UdEwQCMAAw
+CwYDVR0PBAQDAgXgME8GA1UdEQRIMEaCECoudGVzdC5nb29nbGUuZnKCGHdhdGVy
+em9vaS50ZXN0Lmdvb2dsZS5iZYISKi50ZXN0LnlvdXR1YmUuY29thwTAqAEDMA0G
+CSqGSIb3DQEBCwUAA4GBAJFXVifQNub1LUP4JlnX5lXNlo8FxZ2a12AFQs+bzoJ6
+hM044EDjqyxUqSbVePK0ni3w1fHQB5rY9yYC5f8G7aqqTY1QOhoUk8ZTSTRpnkTh
+y4jjdvTZeLDVBlueZUTDRmy2feY5aZIU18vFDK08dTG0A87pppuv1LNIR3loveU8
+-----END CERTIFICATE-----
diff --git a/go/src/google.golang.org/grpc/benchmark/stats/counter.go b/go/src/google.golang.org/grpc/benchmark/stats/counter.go
deleted file mode 100644
index 4389bae..0000000
--- a/go/src/google.golang.org/grpc/benchmark/stats/counter.go
+++ /dev/null
@@ -1,135 +0,0 @@
-package stats
-
-import (
-	"sync"
-	"time"
-)
-
-var (
-	// TimeNow is used for testing.
-	TimeNow = time.Now
-)
-
-const (
-	hour       = 0
-	tenminutes = 1
-	minute     = 2
-)
-
-// Counter is a counter that keeps track of its recent values over a given
-// period of time, and with a given resolution. Use newCounter() to instantiate.
-type Counter struct {
-	mu         sync.RWMutex
-	ts         [3]*timeseries
-	lastUpdate time.Time
-}
-
-// newCounter returns a new Counter.
-func newCounter() *Counter {
-	now := TimeNow()
-	c := &Counter{}
-	c.ts[hour] = newTimeSeries(now, time.Hour, time.Minute)
-	c.ts[tenminutes] = newTimeSeries(now, 10*time.Minute, 10*time.Second)
-	c.ts[minute] = newTimeSeries(now, time.Minute, time.Second)
-	return c
-}
-
-func (c *Counter) advance() time.Time {
-	now := TimeNow()
-	for _, ts := range c.ts {
-		ts.advanceTime(now)
-	}
-	return now
-}
-
-// Value returns the current value of the counter.
-func (c *Counter) Value() int64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	return c.ts[minute].headValue()
-}
-
-// LastUpdate returns the last update time of the counter.
-func (c *Counter) LastUpdate() time.Time {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	return c.lastUpdate
-}
-
-// Set updates the current value of the counter.
-func (c *Counter) Set(value int64) {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-	c.lastUpdate = c.advance()
-	for _, ts := range c.ts {
-		ts.set(value)
-	}
-}
-
-// Incr increments the current value of the counter by 'delta'.
-func (c *Counter) Incr(delta int64) {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-	c.lastUpdate = c.advance()
-	for _, ts := range c.ts {
-		ts.incr(delta)
-	}
-}
-
-// Delta1h returns the delta for the last hour.
-func (c *Counter) Delta1h() int64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	c.advance()
-	return c.ts[hour].delta()
-}
-
-// Delta10m returns the delta for the last 10 minutes.
-func (c *Counter) Delta10m() int64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	c.advance()
-	return c.ts[tenminutes].delta()
-}
-
-// Delta1m returns the delta for the last minute.
-func (c *Counter) Delta1m() int64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	c.advance()
-	return c.ts[minute].delta()
-}
-
-// Rate1h returns the rate of change of the counter in the last hour.
-func (c *Counter) Rate1h() float64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	c.advance()
-	return c.ts[hour].rate()
-}
-
-// Rate10m returns the rate of change of the counter in the last 10 minutes.
-func (c *Counter) Rate10m() float64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	c.advance()
-	return c.ts[tenminutes].rate()
-}
-
-// Rate1m returns the rate of change of the counter in the last minute.
-func (c *Counter) Rate1m() float64 {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-	c.advance()
-	return c.ts[minute].rate()
-}
-
-// Reset resets the counter to an empty state.
-func (c *Counter) Reset() {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-	now := TimeNow()
-	for _, ts := range c.ts {
-		ts.reset(now)
-	}
-}
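The rolling-window counters deleted above are replaced by plain accumulators (`Count`, `Sum`, `SumOfSquares`) on the new `Histogram`. A hedged sketch (illustrative names, not the vendored code) of how mean and variance fall directly out of those three fields:

```go
package main

import (
	"fmt"
	"math"
)

// acc mirrors the three plain accumulators the new Histogram keeps
// instead of the deleted rate counters (the type name is illustrative).
type acc struct {
	count, sum, sumOfSquares int64
}

func (a *acc) add(v int64) {
	a.count++
	a.sum += v
	a.sumOfSquares += v * v
}

// stats derives mean and variance from the accumulators alone:
// variance = E[x^2] - E[x]^2.
func (a *acc) stats() (mean, variance float64) {
	n := float64(a.count)
	mean = float64(a.sum) / n
	variance = float64(a.sumOfSquares)/n - mean*mean
	return
}

func main() {
	var a acc
	for _, v := range []int64{2, 4, 4, 4, 5, 5, 7, 9} {
		a.add(v)
	}
	mean, variance := a.stats()
	fmt.Println(mean, math.Sqrt(variance)) // mean 5, stddev 2
}
```

This is why `Counter`, `Tracker`, and `timeseries` could all go: the benchmark only needs whole-run aggregates, not windowed rates.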
diff --git a/go/src/google.golang.org/grpc/benchmark/stats/histogram.go b/go/src/google.golang.org/grpc/benchmark/stats/histogram.go
index 727808c..099bcd6 100644
--- a/go/src/google.golang.org/grpc/benchmark/stats/histogram.go
+++ b/go/src/google.golang.org/grpc/benchmark/stats/histogram.go
@@ -4,55 +4,108 @@
 	"bytes"
 	"fmt"
 	"io"
+	"log"
+	"math"
 	"strconv"
 	"strings"
-	"time"
 )
 
-// HistogramValue is the value of Histogram objects.
-type HistogramValue struct {
+// Histogram accumulates values in the form of a histogram with
+// exponentially increased bucket sizes.
+type Histogram struct {
 	// Count is the total number of values added to the histogram.
 	Count int64
 	// Sum is the sum of all the values added to the histogram.
 	Sum int64
+	// SumOfSquares is the sum of squares of all values.
+	SumOfSquares int64
 	// Min is the minimum of all the values added to the histogram.
 	Min int64
 	// Max is the maximum of all the values added to the histogram.
 	Max int64
 	// Buckets contains all the buckets of the histogram.
 	Buckets []HistogramBucket
+
+	opts                          HistogramOptions
+	logBaseBucketSize             float64
+	oneOverLogOnePlusGrowthFactor float64
 }
 
-// HistogramBucket is one histogram bucket.
+// HistogramOptions contains the parameters that define the histogram's buckets.
+// The first bucket of the created histogram (with index 0) contains [min, min+n)
+// where n = BaseBucketSize, min = MinValue.
+// Bucket i (i>=1) contains [min + n * m^(i-1), min + n * m^i), where m = 1+GrowthFactor.
+// The type of the values is int64.
+type HistogramOptions struct {
+	// NumBuckets is the number of buckets.
+	NumBuckets int
+	// GrowthFactor is the growth factor of the buckets. A value of 0.1
+	// indicates that bucket N+1 will be 10% larger than bucket N.
+	GrowthFactor float64
+	// BaseBucketSize is the size of the first bucket.
+	BaseBucketSize float64
+	// MinValue is the lower bound of the first bucket.
+	MinValue int64
+}
+
+// HistogramBucket represents one histogram bucket.
 type HistogramBucket struct {
 	// LowBound is the lower bound of the bucket.
-	LowBound int64
+	LowBound float64
 	// Count is the number of values in the bucket.
 	Count int64
 }
 
+// NewHistogram returns a pointer to a new Histogram object that was created
+// with the provided options.
+func NewHistogram(opts HistogramOptions) *Histogram {
+	if opts.NumBuckets == 0 {
+		opts.NumBuckets = 32
+	}
+	if opts.BaseBucketSize == 0.0 {
+		opts.BaseBucketSize = 1.0
+	}
+	h := Histogram{
+		Buckets: make([]HistogramBucket, opts.NumBuckets),
+		Min:     math.MaxInt64,
+		Max:     math.MinInt64,
+
+		opts:                          opts,
+		logBaseBucketSize:             math.Log(opts.BaseBucketSize),
+		oneOverLogOnePlusGrowthFactor: 1 / math.Log(1+opts.GrowthFactor),
+	}
+	m := 1.0 + opts.GrowthFactor
+	delta := opts.BaseBucketSize
+	h.Buckets[0].LowBound = float64(opts.MinValue)
+	for i := 1; i < opts.NumBuckets; i++ {
+		h.Buckets[i].LowBound = float64(opts.MinValue) + delta
+		delta = delta * m
+	}
+	return &h
+}
+
 // Print writes textual output of the histogram values.
-func (v HistogramValue) Print(w io.Writer) {
-	avg := float64(v.Sum) / float64(v.Count)
-	fmt.Fprintf(w, "Count: %d  Min: %d  Max: %d  Avg: %.2f\n", v.Count, v.Min, v.Max, avg)
+func (h *Histogram) Print(w io.Writer) {
+	avg := float64(h.Sum) / float64(h.Count)
+	fmt.Fprintf(w, "Count: %d  Min: %d  Max: %d  Avg: %.2f\n", h.Count, h.Min, h.Max, avg)
 	fmt.Fprintf(w, "%s\n", strings.Repeat("-", 60))
-	if v.Count <= 0 {
+	if h.Count <= 0 {
 		return
 	}
 
-	maxBucketDigitLen := len(strconv.FormatInt(v.Buckets[len(v.Buckets)-1].LowBound, 10))
+	maxBucketDigitLen := len(strconv.FormatFloat(h.Buckets[len(h.Buckets)-1].LowBound, 'f', 6, 64))
 	if maxBucketDigitLen < 3 {
 		// For "inf".
 		maxBucketDigitLen = 3
 	}
-	maxCountDigitLen := len(strconv.FormatInt(v.Count, 10))
-	percentMulti := 100 / float64(v.Count)
+	maxCountDigitLen := len(strconv.FormatInt(h.Count, 10))
+	percentMulti := 100 / float64(h.Count)
 
 	accCount := int64(0)
-	for i, b := range v.Buckets {
-		fmt.Fprintf(w, "[%*d, ", maxBucketDigitLen, b.LowBound)
-		if i+1 < len(v.Buckets) {
-			fmt.Fprintf(w, "%*d)", maxBucketDigitLen, v.Buckets[i+1].LowBound)
+	for i, b := range h.Buckets {
+		fmt.Fprintf(w, "[%*f, ", maxBucketDigitLen, b.LowBound)
+		if i+1 < len(h.Buckets) {
+			fmt.Fprintf(w, "%*f)", maxBucketDigitLen, h.Buckets[i+1].LowBound)
 		} else {
 			fmt.Fprintf(w, "%*s)", maxBucketDigitLen, "inf")
 		}
@@ -67,70 +120,22 @@
 }
 
 // String returns the textual output of the histogram values as string.
-func (v HistogramValue) String() string {
+func (h *Histogram) String() string {
 	var b bytes.Buffer
-	v.Print(&b)
+	h.Print(&b)
 	return b.String()
 }
 
-// A Histogram accumulates values in the form of a histogram. The type of the
-// values is int64, which is suitable for keeping track of things like RPC
-// latency in milliseconds. New histogram objects should be obtained via the
-// New() function.
-type Histogram struct {
-	opts    HistogramOptions
-	buckets []bucketInternal
-	count   *Counter
-	sum     *Counter
-	tracker *Tracker
-}
-
-// HistogramOptions contains the parameters that define the histogram's buckets.
-type HistogramOptions struct {
-	// NumBuckets is the number of buckets.
-	NumBuckets int
-	// GrowthFactor is the growth factor of the buckets. A value of 0.1
-	// indicates that bucket N+1 will be 10% larger than bucket N.
-	GrowthFactor float64
-	// SmallestBucketSize is the size of the first bucket. Bucket sizes are
-	// rounded down to the nearest integer.
-	SmallestBucketSize float64
-	// MinValue is the lower bound of the first bucket.
-	MinValue int64
-}
-
-// bucketInternal is the internal representation of a bucket, which includes a
-// rate counter.
-type bucketInternal struct {
-	lowBound int64
-	count    *Counter
-}
-
-// NewHistogram returns a pointer to a new Histogram object that was created
-// with the provided options.
-func NewHistogram(opts HistogramOptions) *Histogram {
-	if opts.NumBuckets == 0 {
-		opts.NumBuckets = 32
+// Clear resets all the content of histogram.
+func (h *Histogram) Clear() {
+	h.Count = 0
+	h.Sum = 0
+	h.SumOfSquares = 0
+	h.Min = math.MaxInt64
+	h.Max = math.MinInt64
+	for i := range h.Buckets {
+		h.Buckets[i].Count = 0
+	}
-	if opts.SmallestBucketSize == 0.0 {
-		opts.SmallestBucketSize = 1.0
-	}
-	h := Histogram{
-		opts:    opts,
-		buckets: make([]bucketInternal, opts.NumBuckets),
-		count:   newCounter(),
-		sum:     newCounter(),
-		tracker: newTracker(),
-	}
-	low := opts.MinValue
-	delta := opts.SmallestBucketSize
-	for i := 0; i < opts.NumBuckets; i++ {
-		h.buckets[i].lowBound = low
-		h.buckets[i].count = newCounter()
-		low = low + int64(delta)
-		delta = delta * (1.0 + opts.GrowthFactor)
-	}
-	return &h
 }
 
 // Opts returns a copy of the options used to create the Histogram.
@@ -144,112 +149,50 @@
 	if err != nil {
 		return err
 	}
-	h.buckets[bucket].count.Incr(1)
-	h.count.Incr(1)
-	h.sum.Incr(value)
-	h.tracker.Push(value)
+	h.Buckets[bucket].Count++
+	h.Count++
+	h.Sum += value
+	h.SumOfSquares += value * value
+	if value < h.Min {
+		h.Min = value
+	}
+	if value > h.Max {
+		h.Max = value
+	}
 	return nil
 }
 
-// LastUpdate returns the time at which the object was last updated.
-func (h *Histogram) LastUpdate() time.Time {
-	return h.count.LastUpdate()
-}
-
-// Value returns the accumulated state of the histogram since it was created.
-func (h *Histogram) Value() HistogramValue {
-	b := make([]HistogramBucket, len(h.buckets))
-	for i, v := range h.buckets {
-		b[i] = HistogramBucket{
-			LowBound: v.lowBound,
-			Count:    v.count.Value(),
-		}
-	}
-
-	v := HistogramValue{
-		Count:   h.count.Value(),
-		Sum:     h.sum.Value(),
-		Min:     h.tracker.Min(),
-		Max:     h.tracker.Max(),
-		Buckets: b,
-	}
-	return v
-}
-
-// Delta1h returns the change in the last hour.
-func (h *Histogram) Delta1h() HistogramValue {
-	b := make([]HistogramBucket, len(h.buckets))
-	for i, v := range h.buckets {
-		b[i] = HistogramBucket{
-			LowBound: v.lowBound,
-			Count:    v.count.Delta1h(),
-		}
-	}
-
-	v := HistogramValue{
-		Count:   h.count.Delta1h(),
-		Sum:     h.sum.Delta1h(),
-		Min:     h.tracker.Min1h(),
-		Max:     h.tracker.Max1h(),
-		Buckets: b,
-	}
-	return v
-}
-
-// Delta10m returns the change in the last 10 minutes.
-func (h *Histogram) Delta10m() HistogramValue {
-	b := make([]HistogramBucket, len(h.buckets))
-	for i, v := range h.buckets {
-		b[i] = HistogramBucket{
-			LowBound: v.lowBound,
-			Count:    v.count.Delta10m(),
-		}
-	}
-
-	v := HistogramValue{
-		Count:   h.count.Delta10m(),
-		Sum:     h.sum.Delta10m(),
-		Min:     h.tracker.Min10m(),
-		Max:     h.tracker.Max10m(),
-		Buckets: b,
-	}
-	return v
-}
-
-// Delta1m returns the change in the last 10 minutes.
-func (h *Histogram) Delta1m() HistogramValue {
-	b := make([]HistogramBucket, len(h.buckets))
-	for i, v := range h.buckets {
-		b[i] = HistogramBucket{
-			LowBound: v.lowBound,
-			Count:    v.count.Delta1m(),
-		}
-	}
-
-	v := HistogramValue{
-		Count:   h.count.Delta1m(),
-		Sum:     h.sum.Delta1m(),
-		Min:     h.tracker.Min1m(),
-		Max:     h.tracker.Max1m(),
-		Buckets: b,
-	}
-	return v
-}
-
-// findBucket does a binary search to find in which bucket the value goes.
 func (h *Histogram) findBucket(value int64) (int, error) {
-	lastBucket := len(h.buckets) - 1
-	min, max := 0, lastBucket
-	for max >= min {
-		b := (min + max) / 2
-		if value >= h.buckets[b].lowBound && (b == lastBucket || value < h.buckets[b+1].lowBound) {
-			return b, nil
-		}
-		if value < h.buckets[b].lowBound {
-			max = b - 1
-			continue
-		}
-		min = b + 1
+	delta := float64(value - h.opts.MinValue)
+	var b int
+	if delta >= h.opts.BaseBucketSize {
+		// b = log_{1+growthFactor} (delta / baseBucketSize) + 1
+		//   = log(delta / baseBucketSize) / log(1+growthFactor) + 1
+		//   = (log(delta) - log(baseBucketSize)) * (1 / log(1+growthFactor)) + 1
+		b = int((math.Log(delta)-h.logBaseBucketSize)*h.oneOverLogOnePlusGrowthFactor + 1)
 	}
-	return 0, fmt.Errorf("no bucket for value: %d", value)
+	if b >= len(h.Buckets) {
+		return 0, fmt.Errorf("no bucket for value: %d", value)
+	}
+	return b, nil
+}
+
+// Merge takes another histogram h2, and merges its content into h.
+// The two histograms must be created by equivalent HistogramOptions.
+func (h *Histogram) Merge(h2 *Histogram) {
+	if h.opts != h2.opts {
+		log.Fatalf("failed to merge histograms, created by inequivalent options")
+	}
+	h.Count += h2.Count
+	h.Sum += h2.Sum
+	h.SumOfSquares += h2.SumOfSquares
+	if h2.Min < h.Min {
+		h.Min = h2.Min
+	}
+	if h2.Max > h.Max {
+		h.Max = h2.Max
+	}
+	for i, b := range h2.Buckets {
+		h.Buckets[i].Count += b.Count
+	}
 }
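The closed-form `findBucket` above replaces the old binary search: with bucket 0 covering [min, min+base) and bucket i covering [min + base*m^(i-1), min + base*m^i), the index follows from a logarithm. A small standalone check of the same arithmetic (constants are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// bucketIndex reproduces the closed-form lookup from findBucket:
// everything below base falls in bucket 0; otherwise the index is
// log_{1+growthFactor}(delta/base) + 1, truncated to an int.
func bucketIndex(value, minValue int64, base, growthFactor float64) int {
	delta := float64(value - minValue)
	if delta < base {
		return 0
	}
	return int((math.Log(delta)-math.Log(base))/math.Log(1+growthFactor) + 1)
}

func main() {
	// base 1, growth factor 1 => bucket lower bounds 0, 1, 2, 4, 8, ...
	for _, v := range []int64{0, 1, 2, 3, 7} {
		fmt.Println(v, bucketIndex(v, 0, 1, 1)) // indices 0, 1, 2, 2, 3
	}
}
```

Note that values exactly on a bucket boundary (e.g. 4 or 8 here) are sensitive to floating-point rounding in the logarithms, which is why the production code still bounds-checks the result against `len(h.Buckets)`.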
diff --git a/go/src/google.golang.org/grpc/benchmark/stats/stats.go b/go/src/google.golang.org/grpc/benchmark/stats/stats.go
index 4290ad7..e0edb17 100644
--- a/go/src/google.golang.org/grpc/benchmark/stats/stats.go
+++ b/go/src/google.golang.org/grpc/benchmark/stats/stats.go
@@ -84,10 +84,10 @@
 	}
 	stats.histogram = NewHistogram(HistogramOptions{
 		NumBuckets: numBuckets,
-		// max(i.e., Nth lower bound) = min + (1 + growthFactor)^(numBuckets-2).
-		GrowthFactor:       math.Pow(float64(stats.max-stats.min), 1/float64(stats.numBuckets-2)) - 1,
-		SmallestBucketSize: 1.0,
-		MinValue:           stats.min})
+		// max-min(lower bound of last bucket) = (1 + growthFactor)^(numBuckets-2) * baseBucketSize.
+		GrowthFactor:   math.Pow(float64(stats.max-stats.min), 1/float64(numBuckets-2)) - 1,
+		BaseBucketSize: 1.0,
+		MinValue:       stats.min})
 
 	for _, d := range stats.durations {
 		stats.histogram.Add(int64(d / stats.unit))
@@ -104,7 +104,7 @@
 		fmt.Fprint(w, "Histogram (empty)\n")
 	} else {
 		fmt.Fprintf(w, "Histogram (unit: %s)\n", fmt.Sprintf("%v", stats.unit)[1:])
-		stats.histogram.Value().Print(w)
+		stats.histogram.Print(w)
 	}
 }
 
diff --git a/go/src/google.golang.org/grpc/benchmark/stats/timeseries.go b/go/src/google.golang.org/grpc/benchmark/stats/timeseries.go
deleted file mode 100644
index 2ba18a4..0000000
--- a/go/src/google.golang.org/grpc/benchmark/stats/timeseries.go
+++ /dev/null
@@ -1,154 +0,0 @@
-package stats
-
-import (
-	"math"
-	"time"
-)
-
-// timeseries holds the history of a changing value over a predefined period of
-// time.
-type timeseries struct {
-	size       int           // The number of time slots. Equivalent to len(slots).
-	resolution time.Duration // The time resolution of each slot.
-	stepCount  int64         // The number of intervals seen since creation.
-	head       int           // The position of the current time in slots.
-	time       time.Time     // The time at the beginning of the current time slot.
-	slots      []int64       // A circular buffer of time slots.
-}
-
-// newTimeSeries returns a newly allocated timeseries that covers the requested
-// period with the given resolution.
-func newTimeSeries(initialTime time.Time, period, resolution time.Duration) *timeseries {
-	size := int(period.Nanoseconds()/resolution.Nanoseconds()) + 1
-	return &timeseries{
-		size:       size,
-		resolution: resolution,
-		stepCount:  1,
-		time:       initialTime,
-		slots:      make([]int64, size),
-	}
-}
-
-// advanceTimeWithFill moves the timeseries forward to time t and fills in any
-// slots that get skipped in the process with the given value. Values older than
-// the timeseries period are lost.
-func (ts *timeseries) advanceTimeWithFill(t time.Time, value int64) {
-	advanceTo := t.Truncate(ts.resolution)
-	if !advanceTo.After(ts.time) {
-		// This is shortcut for the most common case of a busy counter
-		// where updates come in many times per ts.resolution.
-		ts.time = advanceTo
-		return
-	}
-	steps := int(advanceTo.Sub(ts.time).Nanoseconds() / ts.resolution.Nanoseconds())
-	ts.stepCount += int64(steps)
-	if steps > ts.size {
-		steps = ts.size
-	}
-	for steps > 0 {
-		ts.head = (ts.head + 1) % ts.size
-		ts.slots[ts.head] = value
-		steps--
-	}
-	ts.time = advanceTo
-}
-
-// advanceTime moves the timeseries forward to time t and fills in any slots
-// that get skipped in the process with the head value. Values older than the
-// timeseries period are lost.
-func (ts *timeseries) advanceTime(t time.Time) {
-	ts.advanceTimeWithFill(t, ts.slots[ts.head])
-}
-
-// set sets the current value of the timeseries.
-func (ts *timeseries) set(value int64) {
-	ts.slots[ts.head] = value
-}
-
-// incr sets the current value of the timeseries.
-func (ts *timeseries) incr(delta int64) {
-	ts.slots[ts.head] += delta
-}
-
-// headValue returns the latest value from the timeseries.
-func (ts *timeseries) headValue() int64 {
-	return ts.slots[ts.head]
-}
-
-// headTime returns the time of the latest value from the timeseries.
-func (ts *timeseries) headTime() time.Time {
-	return ts.time
-}
-
-// tailValue returns the oldest value from the timeseries.
-func (ts *timeseries) tailValue() int64 {
-	if ts.stepCount < int64(ts.size) {
-		return 0
-	}
-	return ts.slots[(ts.head+1)%ts.size]
-}
-
-// tailTime returns the time of the oldest value from the timeseries.
-func (ts *timeseries) tailTime() time.Time {
-	size := int64(ts.size)
-	if ts.stepCount < size {
-		size = ts.stepCount
-	}
-	return ts.time.Add(-time.Duration(size-1) * ts.resolution)
-}
-
-// delta returns the difference between the newest and oldest values from the
-// timeseries.
-func (ts *timeseries) delta() int64 {
-	return ts.headValue() - ts.tailValue()
-}
-
-// rate returns the rate of change between the oldest and newest values from
-// the timeseries in units per second.
-func (ts *timeseries) rate() float64 {
-	deltaTime := ts.headTime().Sub(ts.tailTime()).Seconds()
-	if deltaTime == 0 {
-		return 0
-	}
-	return float64(ts.delta()) / deltaTime
-}
-
-// min returns the smallest value from the timeseries.
-func (ts *timeseries) min() int64 {
-	to := ts.size
-	if ts.stepCount < int64(ts.size) {
-		to = ts.head + 1
-	}
-	tail := (ts.head + 1) % ts.size
-	min := int64(math.MaxInt64)
-	for b := 0; b < to; b++ {
-		if b != tail && ts.slots[b] < min {
-			min = ts.slots[b]
-		}
-	}
-	return min
-}
-
-// max returns the largest value from the timeseries.
-func (ts *timeseries) max() int64 {
-	to := ts.size
-	if ts.stepCount < int64(ts.size) {
-		to = ts.head + 1
-	}
-	tail := (ts.head + 1) % ts.size
-	max := int64(math.MinInt64)
-	for b := 0; b < to; b++ {
-		if b != tail && ts.slots[b] > max {
-			max = ts.slots[b]
-		}
-	}
-	return max
-}
-
-// reset resets the timeseries to an empty state.
-func (ts *timeseries) reset(t time.Time) {
-	ts.head = 0
-	ts.time = t
-	ts.stepCount = 1
-	ts.slots = make([]int64, ts.size)
-}
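The deleted timeseries kept a circular buffer of slots, bumping the head one slot per resolution step and carrying the head value into any skipped slots. A hedged, much-simplified sketch of that advance/delta mechanic (not the original code; time handling is reduced to integer steps):

```go
package main

import "fmt"

// ring is a stripped-down analogue of the deleted timeseries: a circular
// buffer of slots advanced one slot per step.
type ring struct {
	slots []int64
	head  int
}

func newRing(size int) *ring { return &ring{slots: make([]int64, size)} }

// advance moves forward by steps slots, filling each skipped slot with
// the current head value (as advanceTime did via advanceTimeWithFill).
func (r *ring) advance(steps int) {
	if steps > len(r.slots) {
		steps = len(r.slots) // older values fall off the window
	}
	fill := r.slots[r.head]
	for ; steps > 0; steps-- {
		r.head = (r.head + 1) % len(r.slots)
		r.slots[r.head] = fill
	}
}

func (r *ring) incr(d int64) { r.slots[r.head] += d }

// delta is newest minus oldest, as in the deleted timeseries.
func (r *ring) delta() int64 {
	return r.slots[r.head] - r.slots[(r.head+1)%len(r.slots)]
}

func main() {
	r := newRing(4)
	r.incr(10)   // current slot: 10
	r.advance(1) // carry 10 into the next slot
	r.incr(5)    // that slot becomes 15
	fmt.Println(r.delta())
}
```

The fill-on-advance step is what made `Counter.Delta*` cheap: the window never has to be rescanned, only the head and tail slots compared.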
diff --git a/go/src/google.golang.org/grpc/benchmark/stats/tracker.go b/go/src/google.golang.org/grpc/benchmark/stats/tracker.go
deleted file mode 100644
index 802f729..0000000
--- a/go/src/google.golang.org/grpc/benchmark/stats/tracker.go
+++ /dev/null
@@ -1,159 +0,0 @@
-package stats
-
-import (
-	"math"
-	"sync"
-	"time"
-)
-
-// Tracker is a min/max value tracker that keeps track of its min/max values
-// over a given period of time, and with a given resolution. The initial min
-// and max values are math.MaxInt64 and math.MinInt64 respectively.
-type Tracker struct {
-	mu           sync.RWMutex
-	min, max     int64 // All time min/max.
-	minTS, maxTS [3]*timeseries
-	lastUpdate   time.Time
-}
-
-// newTracker returns a new Tracker.
-func newTracker() *Tracker {
-	now := TimeNow()
-	t := &Tracker{}
-	t.minTS[hour] = newTimeSeries(now, time.Hour, time.Minute)
-	t.minTS[tenminutes] = newTimeSeries(now, 10*time.Minute, 10*time.Second)
-	t.minTS[minute] = newTimeSeries(now, time.Minute, time.Second)
-	t.maxTS[hour] = newTimeSeries(now, time.Hour, time.Minute)
-	t.maxTS[tenminutes] = newTimeSeries(now, 10*time.Minute, 10*time.Second)
-	t.maxTS[minute] = newTimeSeries(now, time.Minute, time.Second)
-	t.init()
-	return t
-}
-
-func (t *Tracker) init() {
-	t.min = math.MaxInt64
-	t.max = math.MinInt64
-	for _, ts := range t.minTS {
-		ts.set(math.MaxInt64)
-	}
-	for _, ts := range t.maxTS {
-		ts.set(math.MinInt64)
-	}
-}
-
-func (t *Tracker) advance() time.Time {
-	now := TimeNow()
-	for _, ts := range t.minTS {
-		ts.advanceTimeWithFill(now, math.MaxInt64)
-	}
-	for _, ts := range t.maxTS {
-		ts.advanceTimeWithFill(now, math.MinInt64)
-	}
-	return now
-}
-
-// LastUpdate returns the last update time of the range.
-func (t *Tracker) LastUpdate() time.Time {
-	t.mu.RLock()
-	defer t.mu.RUnlock()
-	return t.lastUpdate
-}
-
-// Push adds a new value if it is a new minimum or maximum.
-func (t *Tracker) Push(value int64) {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.lastUpdate = t.advance()
-	if t.min > value {
-		t.min = value
-	}
-	if t.max < value {
-		t.max = value
-	}
-	for _, ts := range t.minTS {
-		if ts.headValue() > value {
-			ts.set(value)
-		}
-	}
-	for _, ts := range t.maxTS {
-		if ts.headValue() < value {
-			ts.set(value)
-		}
-	}
-}
-
-// Min returns the minimum value of the tracker
-func (t *Tracker) Min() int64 {
-	t.mu.RLock()
-	defer t.mu.RUnlock()
-	return t.min
-}
-
-// Max returns the maximum value of the tracker.
-func (t *Tracker) Max() int64 {
-	t.mu.RLock()
-	defer t.mu.RUnlock()
-	return t.max
-}
-
-// Min1h returns the minimum value for the last hour.
-func (t *Tracker) Min1h() int64 {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.advance()
-	return t.minTS[hour].min()
-}
-
-// Max1h returns the maximum value for the last hour.
-func (t *Tracker) Max1h() int64 {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.advance()
-	return t.maxTS[hour].max()
-}
-
-// Min10m returns the minimum value for the last 10 minutes.
-func (t *Tracker) Min10m() int64 {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.advance()
-	return t.minTS[tenminutes].min()
-}
-
-// Max10m returns the maximum value for the last 10 minutes.
-func (t *Tracker) Max10m() int64 {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.advance()
-	return t.maxTS[tenminutes].max()
-}
-
-// Min1m returns the minimum value for the last 1 minute.
-func (t *Tracker) Min1m() int64 {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.advance()
-	return t.minTS[minute].min()
-}
-
-// Max1m returns the maximum value for the last 1 minute.
-func (t *Tracker) Max1m() int64 {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	t.advance()
-	return t.maxTS[minute].max()
-}
-
-// Reset resets the range to an empty state.
-func (t *Tracker) Reset() {
-	t.mu.Lock()
-	defer t.mu.Unlock()
-	now := TimeNow()
-	for _, ts := range t.minTS {
-		ts.reset(now)
-	}
-	for _, ts := range t.maxTS {
-		ts.reset(now)
-	}
-	t.init()
-}
diff --git a/go/src/google.golang.org/grpc/benchmark/worker/benchmark_client.go b/go/src/google.golang.org/grpc/benchmark/worker/benchmark_client.go
new file mode 100644
index 0000000..77e522f
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/worker/benchmark_client.go
@@ -0,0 +1,399 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package main
+
+import (
+	"math"
+	"runtime"
+	"sync"
+	"time"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/benchmark"
+	testpb "google.golang.org/grpc/benchmark/grpc_testing"
+	"google.golang.org/grpc/benchmark/stats"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/credentials"
+	"google.golang.org/grpc/grpclog"
+)
+
+var (
+	caFile = "benchmark/server/testdata/ca.pem"
+)
+
+type lockingHistogram struct {
+	mu        sync.Mutex
+	histogram *stats.Histogram
+}
+
+func (h *lockingHistogram) add(value int64) {
+	h.mu.Lock()
+	defer h.mu.Unlock()
+	h.histogram.Add(value)
+}
+
+// swap sets h.histogram to new, and returns its old value.
+func (h *lockingHistogram) swap(new *stats.Histogram) *stats.Histogram {
+	h.mu.Lock()
+	defer h.mu.Unlock()
+	old := h.histogram
+	h.histogram = new
+	return old
+}
+
+func (h *lockingHistogram) mergeInto(merged *stats.Histogram) {
+	h.mu.Lock()
+	defer h.mu.Unlock()
+	merged.Merge(h.histogram)
+}
+
+type benchmarkClient struct {
+	closeConns        func()
+	stop              chan bool
+	lastResetTime     time.Time
+	histogramOptions  stats.HistogramOptions
+	lockingHistograms []lockingHistogram
+}
+
+func printClientConfig(config *testpb.ClientConfig) {
+	// Some config options are ignored:
+	// - client type:
+	//     will always create sync client
+	// - async client threads.
+	// - core list
+	grpclog.Printf(" * client type: %v (ignored, always creates sync client)", config.ClientType)
+	grpclog.Printf(" * async client threads: %v (ignored)", config.AsyncClientThreads)
+	// TODO: use cores specified by CoreList when setting list of cores is supported in go.
+	grpclog.Printf(" * core list: %v (ignored)", config.CoreList)
+
+	grpclog.Printf(" - security params: %v", config.SecurityParams)
+	grpclog.Printf(" - core limit: %v", config.CoreLimit)
+	grpclog.Printf(" - payload config: %v", config.PayloadConfig)
+	grpclog.Printf(" - rpcs per channel: %v", config.OutstandingRpcsPerChannel)
+	grpclog.Printf(" - channel number: %v", config.ClientChannels)
+	grpclog.Printf(" - load params: %v", config.LoadParams)
+	grpclog.Printf(" - rpc type: %v", config.RpcType)
+	grpclog.Printf(" - histogram params: %v", config.HistogramParams)
+	grpclog.Printf(" - server targets: %v", config.ServerTargets)
+}
+
+func setupClientEnv(config *testpb.ClientConfig) {
+	// Use all CPU cores available on the machine by default.
+	// TODO: Revisit this for the optimal default setup.
+	if config.CoreLimit > 0 {
+		runtime.GOMAXPROCS(int(config.CoreLimit))
+	} else {
+		runtime.GOMAXPROCS(runtime.NumCPU())
+	}
+}
+
+// createConns creates connections according to the given config.
+// It returns the connections and a corresponding function to close them.
+// It returns a non-nil error if anything goes wrong.
+func createConns(config *testpb.ClientConfig) ([]*grpc.ClientConn, func(), error) {
+	var opts []grpc.DialOption
+
+	// Sanity check for client type.
+	switch config.ClientType {
+	case testpb.ClientType_SYNC_CLIENT:
+	case testpb.ClientType_ASYNC_CLIENT:
+	default:
+		return nil, nil, grpc.Errorf(codes.InvalidArgument, "unknown client type: %v", config.ClientType)
+	}
+
+	// Check and set security options.
+	if config.SecurityParams != nil {
+		creds, err := credentials.NewClientTLSFromFile(abs(caFile), config.SecurityParams.ServerHostOverride)
+		if err != nil {
+			return nil, nil, grpc.Errorf(codes.InvalidArgument, "failed to create TLS credentials %v", err)
+		}
+		opts = append(opts, grpc.WithTransportCredentials(creds))
+	} else {
+		opts = append(opts, grpc.WithInsecure())
+	}
+
+	// Use byteBufCodec if it is required.
+	if config.PayloadConfig != nil {
+		switch config.PayloadConfig.Payload.(type) {
+		case *testpb.PayloadConfig_BytebufParams:
+			opts = append(opts, grpc.WithCodec(byteBufCodec{}))
+		case *testpb.PayloadConfig_SimpleParams:
+		default:
+			return nil, nil, grpc.Errorf(codes.InvalidArgument, "unknown payload config: %v", config.PayloadConfig)
+		}
+	}
+
+	// Create connections.
+	connCount := int(config.ClientChannels)
+	conns := make([]*grpc.ClientConn, connCount, connCount)
+	for connIndex := 0; connIndex < connCount; connIndex++ {
+		conns[connIndex] = benchmark.NewClientConn(config.ServerTargets[connIndex%len(config.ServerTargets)], opts...)
+	}
+
+	return conns, func() {
+		for _, conn := range conns {
+			conn.Close()
+		}
+	}, nil
+}
+
+func performRPCs(config *testpb.ClientConfig, conns []*grpc.ClientConn, bc *benchmarkClient) error {
+	// Read payload size and type from config.
+	var (
+		payloadReqSize, payloadRespSize int
+		payloadType                     string
+	)
+	if config.PayloadConfig != nil {
+		switch c := config.PayloadConfig.Payload.(type) {
+		case *testpb.PayloadConfig_BytebufParams:
+			payloadReqSize = int(c.BytebufParams.ReqSize)
+			payloadRespSize = int(c.BytebufParams.RespSize)
+			payloadType = "bytebuf"
+		case *testpb.PayloadConfig_SimpleParams:
+			payloadReqSize = int(c.SimpleParams.ReqSize)
+			payloadRespSize = int(c.SimpleParams.RespSize)
+			payloadType = "protobuf"
+		default:
+			return grpc.Errorf(codes.InvalidArgument, "unknown payload config: %v", config.PayloadConfig)
+		}
+	}
+
+	// TODO add open loop distribution.
+	switch config.LoadParams.Load.(type) {
+	case *testpb.LoadParams_ClosedLoop:
+	case *testpb.LoadParams_Poisson:
+		return grpc.Errorf(codes.Unimplemented, "unsupported load params: %v", config.LoadParams)
+	default:
+		return grpc.Errorf(codes.InvalidArgument, "unknown load params: %v", config.LoadParams)
+	}
+
+	rpcCountPerConn := int(config.OutstandingRpcsPerChannel)
+
+	switch config.RpcType {
+	case testpb.RpcType_UNARY:
+		bc.doCloseLoopUnary(conns, rpcCountPerConn, payloadReqSize, payloadRespSize)
+		// TODO open loop.
+	case testpb.RpcType_STREAMING:
+		bc.doCloseLoopStreaming(conns, rpcCountPerConn, payloadReqSize, payloadRespSize, payloadType)
+		// TODO open loop.
+	default:
+		return grpc.Errorf(codes.InvalidArgument, "unknown rpc type: %v", config.RpcType)
+	}
+
+	return nil
+}
+
+func startBenchmarkClient(config *testpb.ClientConfig) (*benchmarkClient, error) {
+	printClientConfig(config)
+
+	// Set running environment like how many cores to use.
+	setupClientEnv(config)
+
+	conns, closeConns, err := createConns(config)
+	if err != nil {
+		return nil, err
+	}
+
+	rpcCountPerConn := int(config.OutstandingRpcsPerChannel)
+	bc := &benchmarkClient{
+		histogramOptions: stats.HistogramOptions{
+			NumBuckets:     int(math.Log(config.HistogramParams.MaxPossible)/math.Log(1+config.HistogramParams.Resolution)) + 1,
+			GrowthFactor:   config.HistogramParams.Resolution,
+			BaseBucketSize: (1 + config.HistogramParams.Resolution),
+			MinValue:       0,
+		},
+		lockingHistograms: make([]lockingHistogram, rpcCountPerConn*len(conns), rpcCountPerConn*len(conns)),
+
+		stop:          make(chan bool),
+		lastResetTime: time.Now(),
+		closeConns:    closeConns,
+	}
+
+	if err = performRPCs(config, conns, bc); err != nil {
+		// Close all connections if performRPCs failed.
+		closeConns()
+		return nil, err
+	}
+
+	return bc, nil
+}
+
+func (bc *benchmarkClient) doCloseLoopUnary(conns []*grpc.ClientConn, rpcCountPerConn int, reqSize int, respSize int) {
+	for ic, conn := range conns {
+		client := testpb.NewBenchmarkServiceClient(conn)
+		// For each connection, create rpcCountPerConn goroutines to do rpc.
+		for j := 0; j < rpcCountPerConn; j++ {
+			// Create histogram for each goroutine.
+			idx := ic*rpcCountPerConn + j
+			bc.lockingHistograms[idx].histogram = stats.NewHistogram(bc.histogramOptions)
+			// Start a goroutine that records into the created histogram (guarded by its mutex).
+			go func(idx int) {
+				// TODO: do warm up if necessary.
+				// For now, rely on the worker client to reserve time for warm-up:
+				// it needs to wait for some time after the client is created,
+				// before starting the benchmark.
+				done := make(chan bool)
+				for {
+					go func() {
+						start := time.Now()
+						if err := benchmark.DoUnaryCall(client, reqSize, respSize); err != nil {
+							select {
+							case <-bc.stop:
+							case done <- false:
+							}
+							return
+						}
+						elapse := time.Since(start)
+						bc.lockingHistograms[idx].add(int64(elapse))
+						select {
+						case <-bc.stop:
+						case done <- true:
+						}
+					}()
+					select {
+					case <-bc.stop:
+						return
+					case <-done:
+					}
+				}
+			}(idx)
+		}
+	}
+}
+
+func (bc *benchmarkClient) doCloseLoopStreaming(conns []*grpc.ClientConn, rpcCountPerConn int, reqSize int, respSize int, payloadType string) {
+	var doRPC func(testpb.BenchmarkService_StreamingCallClient, int, int) error
+	if payloadType == "bytebuf" {
+		doRPC = benchmark.DoByteBufStreamingRoundTrip
+	} else {
+		doRPC = benchmark.DoStreamingRoundTrip
+	}
+	for ic, conn := range conns {
+		// For each connection, create rpcCountPerConn goroutines to do rpc.
+		for j := 0; j < rpcCountPerConn; j++ {
+			c := testpb.NewBenchmarkServiceClient(conn)
+			stream, err := c.StreamingCall(context.Background())
+			if err != nil {
+				grpclog.Fatalf("%v.StreamingCall(_) = _, %v", c, err)
+			}
+			// Create histogram for each goroutine.
+			idx := ic*rpcCountPerConn + j
+			bc.lockingHistograms[idx].histogram = stats.NewHistogram(bc.histogramOptions)
+			// Start a goroutine that records into the created histogram (guarded by its mutex).
+			go func(idx int) {
+				// TODO: do warm up if necessary.
+				// For now, rely on the worker client to reserve time for warm-up:
+				// it needs to wait for some time after the client is created,
+				// before starting the benchmark.
+				done := make(chan bool)
+				for {
+					go func() {
+						start := time.Now()
+						if err := doRPC(stream, reqSize, respSize); err != nil {
+							select {
+							case <-bc.stop:
+							case done <- false:
+							}
+							return
+						}
+						elapse := time.Since(start)
+						bc.lockingHistograms[idx].add(int64(elapse))
+						select {
+						case <-bc.stop:
+						case done <- true:
+						}
+					}()
+					select {
+					case <-bc.stop:
+						return
+					case <-done:
+					}
+				}
+			}(idx)
+		}
+	}
+}
+
+// getStats returns the stats for the benchmark client.
+// It resets lastResetTime and all histograms if argument reset is true.
+func (bc *benchmarkClient) getStats(reset bool) *testpb.ClientStats {
+	var timeElapsed float64
+	mergedHistogram := stats.NewHistogram(bc.histogramOptions)
+
+	if reset {
+		// Merging histogram may take some time.
+		// Put all histograms aside and merge later.
+		toMerge := make([]*stats.Histogram, len(bc.lockingHistograms), len(bc.lockingHistograms))
+		for i := range bc.lockingHistograms {
+			toMerge[i] = bc.lockingHistograms[i].swap(stats.NewHistogram(bc.histogramOptions))
+		}
+
+		for i := 0; i < len(toMerge); i++ {
+			mergedHistogram.Merge(toMerge[i])
+		}
+
+		timeElapsed = time.Since(bc.lastResetTime).Seconds()
+		bc.lastResetTime = time.Now()
+	} else {
+		// Merge only, not reset.
+		for i := range bc.lockingHistograms {
+			bc.lockingHistograms[i].mergeInto(mergedHistogram)
+		}
+		timeElapsed = time.Since(bc.lastResetTime).Seconds()
+	}
+
+	b := make([]uint32, len(mergedHistogram.Buckets), len(mergedHistogram.Buckets))
+	for i, v := range mergedHistogram.Buckets {
+		b[i] = uint32(v.Count)
+	}
+	return &testpb.ClientStats{
+		Latencies: &testpb.HistogramData{
+			Bucket:       b,
+			MinSeen:      float64(mergedHistogram.Min),
+			MaxSeen:      float64(mergedHistogram.Max),
+			Sum:          float64(mergedHistogram.Sum),
+			SumOfSquares: float64(mergedHistogram.SumOfSquares),
+			Count:        float64(mergedHistogram.Count),
+		},
+		TimeElapsed: timeElapsed,
+		TimeUser:    0,
+		TimeSystem:  0,
+	}
+}
+
+func (bc *benchmarkClient) shutdown() {
+	close(bc.stop)
+	bc.closeConns()
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/worker/benchmark_server.go b/go/src/google.golang.org/grpc/benchmark/worker/benchmark_server.go
new file mode 100644
index 0000000..667ef2c
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/worker/benchmark_server.go
@@ -0,0 +1,173 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package main
+
+import (
+	"runtime"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/benchmark"
+	testpb "google.golang.org/grpc/benchmark/grpc_testing"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/credentials"
+	"google.golang.org/grpc/grpclog"
+)
+
+var (
+	// File path related to google.golang.org/grpc.
+	certFile = "benchmark/server/testdata/server1.pem"
+	keyFile  = "benchmark/server/testdata/server1.key"
+)
+
+type benchmarkServer struct {
+	port          int
+	cores         int
+	closeFunc     func()
+	mu            sync.RWMutex
+	lastResetTime time.Time
+}
+
+func printServerConfig(config *testpb.ServerConfig) {
+	// Some config options are ignored:
+	// - server type:
+	//     will always start sync server
+	// - async server threads
+	// - core list
+	grpclog.Printf(" * server type: %v (ignored, always starts sync server)", config.ServerType)
+	grpclog.Printf(" * async server threads: %v (ignored)", config.AsyncServerThreads)
+	// TODO: use cores specified by CoreList when setting list of cores is supported in go.
+	grpclog.Printf(" * core list: %v (ignored)", config.CoreList)
+
+	grpclog.Printf(" - security params: %v", config.SecurityParams)
+	grpclog.Printf(" - core limit: %v", config.CoreLimit)
+	grpclog.Printf(" - port: %v", config.Port)
+	grpclog.Printf(" - payload config: %v", config.PayloadConfig)
+}
+
+func startBenchmarkServer(config *testpb.ServerConfig, serverPort int) (*benchmarkServer, error) {
+	printServerConfig(config)
+
+	// Use all CPU cores available on the machine by default.
+	// TODO: Revisit this for the optimal default setup.
+	numOfCores := runtime.NumCPU()
+	if config.CoreLimit > 0 {
+		numOfCores = int(config.CoreLimit)
+	}
+	runtime.GOMAXPROCS(numOfCores)
+
+	var opts []grpc.ServerOption
+
+	// Sanity check for server type.
+	switch config.ServerType {
+	case testpb.ServerType_SYNC_SERVER:
+	case testpb.ServerType_ASYNC_SERVER:
+	case testpb.ServerType_ASYNC_GENERIC_SERVER:
+	default:
+		return nil, grpc.Errorf(codes.InvalidArgument, "unknown server type: %v", config.ServerType)
+	}
+
+	// Set security options.
+	if config.SecurityParams != nil {
+		creds, err := credentials.NewServerTLSFromFile(abs(certFile), abs(keyFile))
+		if err != nil {
+			grpclog.Fatalf("failed to generate credentials %v", err)
+		}
+		opts = append(opts, grpc.Creds(creds))
+	}
+
+	// Priority: config.Port > serverPort > default (0).
+	port := int(config.Port)
+	if port == 0 {
+		port = serverPort
+	}
+
+	// Create different benchmark server according to config.
+	var (
+		addr      string
+		closeFunc func()
+		err       error
+	)
+	if config.PayloadConfig != nil {
+		switch payload := config.PayloadConfig.Payload.(type) {
+		case *testpb.PayloadConfig_BytebufParams:
+			opts = append(opts, grpc.CustomCodec(byteBufCodec{}))
+			addr, closeFunc = benchmark.StartServer(benchmark.ServerInfo{
+				Addr:     ":" + strconv.Itoa(port),
+				Type:     "bytebuf",
+				Metadata: payload.BytebufParams.RespSize,
+			}, opts...)
+		case *testpb.PayloadConfig_SimpleParams:
+			addr, closeFunc = benchmark.StartServer(benchmark.ServerInfo{
+				Addr: ":" + strconv.Itoa(port),
+				Type: "protobuf",
+			}, opts...)
+		case *testpb.PayloadConfig_ComplexParams:
+			return nil, grpc.Errorf(codes.Unimplemented, "unsupported payload config: %v", config.PayloadConfig)
+		default:
+			return nil, grpc.Errorf(codes.InvalidArgument, "unknown payload config: %v", config.PayloadConfig)
+		}
+	} else {
+		// Start protobuf server if payload config is nil.
+		addr, closeFunc = benchmark.StartServer(benchmark.ServerInfo{
+			Addr: ":" + strconv.Itoa(port),
+			Type: "protobuf",
+		}, opts...)
+	}
+
+	grpclog.Printf("benchmark server listening at %v", addr)
+	addrSplitted := strings.Split(addr, ":")
+	p, err := strconv.Atoi(addrSplitted[len(addrSplitted)-1])
+	if err != nil {
+		grpclog.Fatalf("failed to get port number from server address: %v", err)
+	}
+
+	return &benchmarkServer{port: p, cores: numOfCores, closeFunc: closeFunc, lastResetTime: time.Now()}, nil
+}
+
+// getStats returns the stats for the benchmark server.
+// It resets lastResetTime if argument reset is true.
+func (bs *benchmarkServer) getStats(reset bool) *testpb.ServerStats {
+	// TODO wall time, sys time, user time.
+	bs.mu.RLock()
+	defer bs.mu.RUnlock()
+	timeElapsed := time.Since(bs.lastResetTime).Seconds()
+	if reset {
+		bs.lastResetTime = time.Now()
+	}
+	return &testpb.ServerStats{TimeElapsed: timeElapsed, TimeUser: 0, TimeSystem: 0}
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/worker/main.go b/go/src/google.golang.org/grpc/benchmark/worker/main.go
new file mode 100644
index 0000000..c8815b0
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/worker/main.go
@@ -0,0 +1,235 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package main
+
+import (
+	"flag"
+	"fmt"
+	"io"
+	"net"
+	"runtime"
+	"strconv"
+	"time"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	testpb "google.golang.org/grpc/benchmark/grpc_testing"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/grpclog"
+)
+
+var (
+	driverPort = flag.Int("driver_port", 10000, "port for communication with driver")
+	serverPort = flag.Int("server_port", 0, "port for benchmark server if not specified by server config message")
+)
+
+type byteBufCodec struct {
+}
+
+func (byteBufCodec) Marshal(v interface{}) ([]byte, error) {
+	b, ok := v.(*[]byte)
+	if !ok {
+		return nil, fmt.Errorf("failed to marshal: %v is not of type *[]byte", v)
+	}
+	return *b, nil
+}
+
+func (byteBufCodec) Unmarshal(data []byte, v interface{}) error {
+	b, ok := v.(*[]byte)
+	if !ok {
+		return fmt.Errorf("failed to unmarshal: %v is not of type *[]byte", v)
+	}
+	*b = data
+	return nil
+}
+
+func (byteBufCodec) String() string {
+	return "bytebuffer"
+}
+
+// workerServer implements WorkerService rpc handlers.
+// It can create benchmarkServer or benchmarkClient on demand.
+type workerServer struct {
+	stop       chan<- bool
+	serverPort int
+}
+
+func (s *workerServer) RunServer(stream testpb.WorkerService_RunServerServer) error {
+	var bs *benchmarkServer
+	defer func() {
+		// Close benchmark server when stream ends.
+		grpclog.Printf("closing benchmark server")
+		if bs != nil {
+			bs.closeFunc()
+		}
+	}()
+	for {
+		in, err := stream.Recv()
+		if err == io.EOF {
+			return nil
+		}
+		if err != nil {
+			return err
+		}
+
+		var out *testpb.ServerStatus
+		switch argtype := in.Argtype.(type) {
+		case *testpb.ServerArgs_Setup:
+			grpclog.Printf("server setup received:")
+			if bs != nil {
+				grpclog.Printf("server setup received when server already exists, closing the existing server")
+				bs.closeFunc()
+			}
+			bs, err = startBenchmarkServer(argtype.Setup, s.serverPort)
+			if err != nil {
+				return err
+			}
+			out = &testpb.ServerStatus{
+				Stats: bs.getStats(false),
+				Port:  int32(bs.port),
+				Cores: int32(bs.cores),
+			}
+
+		case *testpb.ServerArgs_Mark:
+			grpclog.Printf("server mark received:")
+			grpclog.Printf(" - %v", argtype)
+			if bs == nil {
+				return grpc.Errorf(codes.InvalidArgument, "server does not exist when mark received")
+			}
+			out = &testpb.ServerStatus{
+				Stats: bs.getStats(argtype.Mark.Reset_),
+				Port:  int32(bs.port),
+				Cores: int32(bs.cores),
+			}
+		}
+
+		if err := stream.Send(out); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (s *workerServer) RunClient(stream testpb.WorkerService_RunClientServer) error {
+	var bc *benchmarkClient
+	defer func() {
+		// Shut down benchmark client when stream ends.
+		grpclog.Printf("shutting down benchmark client")
+		if bc != nil {
+			bc.shutdown()
+		}
+	}()
+	for {
+		in, err := stream.Recv()
+		if err == io.EOF {
+			return nil
+		}
+		if err != nil {
+			return err
+		}
+
+		var out *testpb.ClientStatus
+		switch t := in.Argtype.(type) {
+		case *testpb.ClientArgs_Setup:
+			grpclog.Printf("client setup received:")
+			if bc != nil {
+				grpclog.Printf("client setup received when client already exists, shutting down the existing client")
+				bc.shutdown()
+			}
+			bc, err = startBenchmarkClient(t.Setup)
+			if err != nil {
+				return err
+			}
+			out = &testpb.ClientStatus{
+				Stats: bc.getStats(false),
+			}
+
+		case *testpb.ClientArgs_Mark:
+			grpclog.Printf("client mark received:")
+			grpclog.Printf(" - %v", t)
+			if bc == nil {
+				return grpc.Errorf(codes.InvalidArgument, "client does not exist when mark received")
+			}
+			out = &testpb.ClientStatus{
+				Stats: bc.getStats(t.Mark.Reset_),
+			}
+		}
+
+		if err := stream.Send(out); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (s *workerServer) CoreCount(ctx context.Context, in *testpb.CoreRequest) (*testpb.CoreResponse, error) {
+	grpclog.Printf("core count: %v", runtime.NumCPU())
+	return &testpb.CoreResponse{Cores: int32(runtime.NumCPU())}, nil
+}
+
+func (s *workerServer) QuitWorker(ctx context.Context, in *testpb.Void) (*testpb.Void, error) {
+	grpclog.Printf("quitting worker")
+	s.stop <- true
+	return &testpb.Void{}, nil
+}
+
+func main() {
+	grpc.EnableTracing = false
+
+	flag.Parse()
+	lis, err := net.Listen("tcp", ":"+strconv.Itoa(*driverPort))
+	if err != nil {
+		grpclog.Fatalf("failed to listen: %v", err)
+	}
+	grpclog.Printf("worker listening at port %v", *driverPort)
+
+	s := grpc.NewServer()
+	stop := make(chan bool)
+	testpb.RegisterWorkerServiceServer(s, &workerServer{
+		stop:       stop,
+		serverPort: *serverPort,
+	})
+
+	go func() {
+		<-stop
+		// Wait for 1 second before stopping the server to make sure the return value of QuitWorker is sent to client.
+		// TODO revise this once server graceful stop is supported in gRPC.
+		time.Sleep(time.Second)
+		s.Stop()
+	}()
+
+	s.Serve(lis)
+}
diff --git a/go/src/google.golang.org/grpc/benchmark/worker/util.go b/go/src/google.golang.org/grpc/benchmark/worker/util.go
new file mode 100644
index 0000000..f0016ce
--- /dev/null
+++ b/go/src/google.golang.org/grpc/benchmark/worker/util.go
@@ -0,0 +1,75 @@
+/*
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package main
+
+import (
+	"log"
+	"os"
+	"path/filepath"
+)
+
+// abs returns the absolute path of the given relative file or directory path,
+// relative to the google.golang.org/grpc directory in the user's GOPATH.
+// If rel is already absolute, it is returned unmodified.
+func abs(rel string) string {
+	if filepath.IsAbs(rel) {
+		return rel
+	}
+	v, err := goPackagePath("google.golang.org/grpc")
+	if err != nil {
+		log.Fatalf("Error finding google.golang.org/grpc/testdata directory: %v", err)
+	}
+	return filepath.Join(v, rel)
+}
+
+func goPackagePath(pkg string) (path string, err error) {
+	gp := os.Getenv("GOPATH")
+	if gp == "" {
+		return path, os.ErrNotExist
+	}
+	for _, p := range filepath.SplitList(gp) {
+		dir := filepath.Join(p, "src", filepath.FromSlash(pkg))
+		fi, err := os.Stat(dir)
+		if os.IsNotExist(err) {
+			continue
+		}
+		if err != nil {
+			return "", err
+		}
+		if !fi.IsDir() {
+			continue
+		}
+		return dir, nil
+	}
+	return path, os.ErrNotExist
+}
diff --git a/go/src/google.golang.org/grpc/call.go b/go/src/google.golang.org/grpc/call.go
index 9d0fc8e..d6d993b 100644
--- a/go/src/google.golang.org/grpc/call.go
+++ b/go/src/google.golang.org/grpc/call.go
@@ -132,19 +132,16 @@
 		Last:  true,
 		Delay: false,
 	}
-	var (
-		lastErr error // record the error that happened
-	)
 	for {
 		var (
 			err    error
 			t      transport.ClientTransport
 			stream *transport.Stream
+			// Record the put handler from Balancer.Get(...). It is called once the
+			// RPC has completed or failed.
+			put func()
 		)
-		// TODO(zhaoq): Need a formal spec of retry strategy for non-failfast rpcs.
-		if lastErr != nil && c.failFast {
-			return toRPCErr(lastErr)
-		}
+		// TODO(zhaoq): Need a formal spec of fail-fast.
 		callHdr := &transport.CallHdr{
 			Host:   cc.authority,
 			Method: method,
@@ -152,39 +149,66 @@
 		if cc.dopts.cp != nil {
 			callHdr.SendCompress = cc.dopts.cp.Type()
 		}
-		t, err = cc.dopts.picker.Pick(ctx)
+		gopts := BalancerGetOptions{
+			BlockingWait: !c.failFast,
+		}
+		t, put, err = cc.getTransport(ctx, gopts)
 		if err != nil {
-			if lastErr != nil {
-				// This was a retry; return the error from the last attempt.
-				return toRPCErr(lastErr)
+			// TODO(zhaoq): Probably revisit the error handling.
+			if err == ErrClientConnClosing {
+				return Errorf(codes.FailedPrecondition, "%v", err)
 			}
-			return toRPCErr(err)
+			if _, ok := err.(transport.StreamError); ok {
+				return toRPCErr(err)
+			}
+			if _, ok := err.(transport.ConnectionError); ok {
+				if c.failFast {
+					return toRPCErr(err)
+				}
+			}
+			// All the remaining cases are treated as retryable.
+			continue
 		}
 		if c.traceInfo.tr != nil {
 			c.traceInfo.tr.LazyLog(&payload{sent: true, msg: args}, true)
 		}
 		stream, err = sendRequest(ctx, cc.dopts.codec, cc.dopts.cp, callHdr, t, args, topts)
 		if err != nil {
-			if _, ok := err.(transport.ConnectionError); ok {
-				lastErr = err
-				continue
+			if put != nil {
+				put()
+				put = nil
 			}
-			if lastErr != nil {
-				return toRPCErr(lastErr)
+			if _, ok := err.(transport.ConnectionError); ok {
+				if c.failFast {
+					return toRPCErr(err)
+				}
+				continue
 			}
 			return toRPCErr(err)
 		}
 		// Receive the response
-		lastErr = recvResponse(cc.dopts, t, &c, stream, reply)
-		if _, ok := lastErr.(transport.ConnectionError); ok {
-			continue
+		err = recvResponse(cc.dopts, t, &c, stream, reply)
+		if err != nil {
+			if put != nil {
+				put()
+				put = nil
+			}
+			if _, ok := err.(transport.ConnectionError); ok {
+				if c.failFast {
+					return toRPCErr(err)
+				}
+				continue
+			}
+			t.CloseStream(stream, err)
+			return toRPCErr(err)
 		}
 		if c.traceInfo.tr != nil {
 			c.traceInfo.tr.LazyLog(&payload{sent: false, msg: reply}, true)
 		}
-		t.CloseStream(stream, lastErr)
-		if lastErr != nil {
-			return toRPCErr(lastErr)
+		t.CloseStream(stream, nil)
+		if put != nil {
+			put()
+			put = nil
 		}
 		return Errorf(stream.StatusCode(), "%s", stream.StatusDesc())
 	}
diff --git a/go/src/google.golang.org/grpc/call_test.go b/go/src/google.golang.org/grpc/call_test.go
index feeeb7e..380bf87 100644
--- a/go/src/google.golang.org/grpc/call_test.go
+++ b/go/src/google.golang.org/grpc/call_test.go
@@ -54,6 +54,7 @@
 	expectedResponse = "pong"
 	weirdError       = "format verbs: %v%s"
 	sizeLargeErr     = 1024 * 1024
+	canceled         = 0
 )
 
 type testCodec struct {
@@ -73,7 +74,8 @@
 }
 
 type testStreamHandler struct {
-	t transport.ServerTransport
+	port string
+	t    transport.ServerTransport
 }
 
 func (h *testStreamHandler) handleStream(t *testing.T, s *transport.Stream) {
@@ -100,6 +102,16 @@
 			h.t.WriteStatus(s, codes.Internal, weirdError)
 			return
 		}
+		if v == "canceled" {
+			canceled++
+			h.t.WriteStatus(s, codes.Internal, "")
+			return
+		}
+		if v == "port" {
+			h.t.WriteStatus(s, codes.Internal, h.port)
+			return
+		}
+
 		if v != expectedRequest {
 			h.t.WriteStatus(s, codes.Internal, strings.Repeat("A", sizeLargeErr))
 			return
@@ -154,7 +166,7 @@
 		}
 		st, err := transport.NewServerTransport("http2", conn, maxStreams, nil)
 		if err != nil {
-			return
+			continue
 		}
 		s.mu.Lock()
 		if s.conns == nil {
@@ -164,7 +176,10 @@
 		}
 		s.conns[st] = true
 		s.mu.Unlock()
-		h := &testStreamHandler{st}
+		h := &testStreamHandler{
+			port: s.port,
+			t:    st,
+		}
 		go st.HandleStreams(func(s *transport.Stream) {
 			go h.handleStream(t, s)
 		})
@@ -244,3 +259,20 @@
 	cc.Close()
 	server.stop()
 }
+
+// TestInvokeCancel checks that an Invoke with a canceled context is not sent.
+func TestInvokeCancel(t *testing.T) {
+	server, cc := setUp(t, 0, math.MaxUint32)
+	var reply string
+	req := "canceled"
+	for i := 0; i < 100; i++ {
+		ctx, cancel := context.WithCancel(context.Background())
+		cancel()
+		Invoke(ctx, "/foo/bar", &req, &reply, cc)
+	}
+	if canceled != 0 {
+		t.Fatalf("received %d of 100 canceled requests", canceled)
+	}
+	cc.Close()
+	server.stop()
+}
diff --git a/go/src/google.golang.org/grpc/clientconn.go b/go/src/google.golang.org/grpc/clientconn.go
index fc0b803..885eedd 100644
--- a/go/src/google.golang.org/grpc/clientconn.go
+++ b/go/src/google.golang.org/grpc/clientconn.go
@@ -43,28 +43,35 @@
 
 	"golang.org/x/net/context"
 	"golang.org/x/net/trace"
+	"google.golang.org/grpc/codes"
 	"google.golang.org/grpc/credentials"
 	"google.golang.org/grpc/grpclog"
 	"google.golang.org/grpc/transport"
 )
 
 var (
-	// ErrUnspecTarget indicates that the target address is unspecified.
-	ErrUnspecTarget = errors.New("grpc: target is unspecified")
-	// ErrNoTransportSecurity indicates that there is no transport security
+	// ErrClientConnClosing indicates that the operation is illegal because
+	// the ClientConn is closing.
+	ErrClientConnClosing = errors.New("grpc: the client connection is closing")
+	// ErrClientConnTimeout indicates that the ClientConn cannot establish the
+	// underlying connections within the specified timeout.
+	ErrClientConnTimeout = errors.New("grpc: timed out when dialing")
+
+	// errNoTransportSecurity indicates that there is no transport security
 	// being set for ClientConn. Users should either set one or explicitly
 	// call WithInsecure DialOption to disable security.
-	ErrNoTransportSecurity = errors.New("grpc: no transport security set (use grpc.WithInsecure() explicitly or set credentials)")
-	// ErrCredentialsMisuse indicates that users want to transmit security information
+	errNoTransportSecurity = errors.New("grpc: no transport security set (use grpc.WithInsecure() explicitly or set credentials)")
+	// errCredentialsMisuse indicates that users want to transmit security information
 	// (e.g., oauth2 token) which requires secure connection on an insecure
 	// connection.
-	ErrCredentialsMisuse = errors.New("grpc: the credentials require transport level security (use grpc.WithTransportAuthenticator() to set)")
-	// ErrClientConnClosing indicates that the operation is illegal because
-	// the session is closing.
-	ErrClientConnClosing = errors.New("grpc: the client connection is closing")
-	// ErrClientConnTimeout indicates that the connection could not be
-	// established or re-established within the specified timeout.
-	ErrClientConnTimeout = errors.New("grpc: timed out trying to connect")
+	errCredentialsMisuse = errors.New("grpc: the credentials require transport level security (use grpc.WithTransportAuthenticator() to set)")
+	// errNetworkIO indicates that the connection is down due to some network I/O error.
+	errNetworkIO = errors.New("grpc: failed with network I/O error")
+	// errConnDrain indicates that the connection starts to be drained and does not accept any new RPCs.
+	errConnDrain = errors.New("grpc: the connection is drained")
+	// errConnClosing indicates that the connection is closing.
+	errConnClosing = errors.New("grpc: the connection is closing")
+	errNoAddr      = errors.New("grpc: there is no address available to dial")
 	// minimum time to give a connection to complete
 	minConnectTimeout = 20 * time.Second
 )
@@ -75,9 +82,11 @@
 	codec    Codec
 	cp       Compressor
 	dc       Decompressor
-	picker   Picker
+	bs       backoffStrategy
+	balancer Balancer
 	block    bool
 	insecure bool
+	timeout  time.Duration
 	copts    transport.ConnectOptions
 }
 
@@ -107,10 +116,38 @@
 	}
 }
 
-// WithPicker returns a DialOption which sets a picker for connection selection.
-func WithPicker(p Picker) DialOption {
+// WithBalancer returns a DialOption which sets a load balancer.
+func WithBalancer(b Balancer) DialOption {
 	return func(o *dialOptions) {
-		o.picker = p
+		o.balancer = b
+	}
+}
+
+// WithBackoffMaxDelay configures the dialer to use the provided maximum delay
+// when backing off after failed connection attempts.
+func WithBackoffMaxDelay(md time.Duration) DialOption {
+	return WithBackoffConfig(BackoffConfig{MaxDelay: md})
+}
+
+// WithBackoffConfig configures the dialer to use the provided backoff
+// parameters after connection failures.
+//
+// Use WithBackoffMaxDelay until more parameters on BackoffConfig are opened up
+// for use.
+func WithBackoffConfig(b BackoffConfig) DialOption {
+	// Set defaults to ensure that provided BackoffConfig is valid and
+	// unexported fields get default values.
+	setDefaults(&b)
+	return withBackoff(b)
+}
+
+// withBackoff sets the backoff strategy used for retries after a
+// failed connection attempt.
+//
+// This can be exported if arbitrary backoff strategies are allowed by gRPC.
+func withBackoff(bs backoffStrategy) DialOption {
+	return func(o *dialOptions) {
+		o.bs = bs
 	}
 }
 
@@ -147,10 +184,11 @@
 	}
 }
 
-// WithTimeout returns a DialOption that configures a timeout for dialing a client connection.
+// WithTimeout returns a DialOption that configures a timeout for dialing a ClientConn
+// initially. This is valid if and only if WithBlock() is present.
 func WithTimeout(d time.Duration) DialOption {
 	return func(o *dialOptions) {
-		o.copts.Timeout = d
+		o.timeout = d
 	}
 }
 
@@ -172,6 +210,7 @@
 func Dial(target string, opts ...DialOption) (*ClientConn, error) {
 	cc := &ClientConn{
 		target: target,
+		conns:  make(map[Address]*addrConn),
 	}
 	for _, opt := range opts {
 		opt(&cc.dopts)
@@ -180,13 +219,58 @@
 		// Set the default codec.
 		cc.dopts.codec = protoCodec{}
 	}
-	if cc.dopts.picker == nil {
-		cc.dopts.picker = &unicastPicker{
-			target: target,
+
+	if cc.dopts.bs == nil {
+		cc.dopts.bs = DefaultBackoffConfig
+	}
+
+	cc.balancer = cc.dopts.balancer
+	if cc.balancer == nil {
+		cc.balancer = RoundRobin(nil)
+	}
+	if err := cc.balancer.Start(target); err != nil {
+		return nil, err
+	}
+	var (
+		ok    bool
+		addrs []Address
+	)
+	ch := cc.balancer.Notify()
+	if ch == nil {
+		// There is no name resolver installed.
+		addrs = append(addrs, Address{Addr: target})
+	} else {
+		addrs, ok = <-ch
+		if !ok || len(addrs) == 0 {
+			return nil, errNoAddr
 		}
 	}
-	if err := cc.dopts.picker.Init(cc); err != nil {
-		return nil, err
+	waitC := make(chan error, 1)
+	go func() {
+		for _, a := range addrs {
+			if err := cc.newAddrConn(a, false); err != nil {
+				waitC <- err
+				return
+			}
+		}
+		close(waitC)
+	}()
+	var timeoutCh <-chan time.Time
+	if cc.dopts.timeout > 0 {
+		timeoutCh = time.After(cc.dopts.timeout)
+	}
+	select {
+	case err := <-waitC:
+		if err != nil {
+			cc.Close()
+			return nil, err
+		}
+	case <-timeoutCh:
+		cc.Close()
+		return nil, ErrClientConnTimeout
+	}
+	if ok {
+		go cc.lbWatcher()
 	}
 	colonPos := strings.LastIndex(target, ":")
 	if colonPos == -1 {
@@ -229,325 +313,361 @@
 	}
 }
 
-// ClientConn represents a client connection to an RPC service.
+// ClientConn represents a client connection to an RPC server.
 type ClientConn struct {
 	target    string
+	balancer  Balancer
 	authority string
 	dopts     dialOptions
+
+	mu    sync.RWMutex
+	conns map[Address]*addrConn
 }
 
-// State returns the connectivity state of cc.
-// This is EXPERIMENTAL API.
-func (cc *ClientConn) State() (ConnectivityState, error) {
-	return cc.dopts.picker.State()
+func (cc *ClientConn) lbWatcher() {
+	for addrs := range cc.balancer.Notify() {
+		var (
+			add []Address   // Addresses that need connections set up.
+			del []*addrConn // Connections that need to be torn down.
+		)
+		cc.mu.Lock()
+		for _, a := range addrs {
+			if _, ok := cc.conns[a]; !ok {
+				add = append(add, a)
+			}
+		}
+		for k, c := range cc.conns {
+			var keep bool
+			for _, a := range addrs {
+				if k == a {
+					keep = true
+					break
+				}
+			}
+			if !keep {
+				del = append(del, c)
+			}
+		}
+		cc.mu.Unlock()
+		for _, a := range add {
+			cc.newAddrConn(a, true)
+		}
+		for _, c := range del {
+			c.tearDown(errConnDrain)
+		}
+	}
 }
 
-// WaitForStateChange blocks until the state changes to something other than the sourceState.
-// It returns the new state or error.
-// This is EXPERIMENTAL API.
-func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error) {
-	return cc.dopts.picker.WaitForStateChange(ctx, sourceState)
+func (cc *ClientConn) newAddrConn(addr Address, skipWait bool) error {
+	ac := &addrConn{
+		cc:           cc,
+		addr:         addr,
+		dopts:        cc.dopts,
+		shutdownChan: make(chan struct{}),
+	}
+	if EnableTracing {
+		ac.events = trace.NewEventLog("grpc.ClientConn", ac.addr.Addr)
+	}
+	if !ac.dopts.insecure {
+		var ok bool
+		for _, cd := range ac.dopts.copts.AuthOptions {
+			if _, ok = cd.(credentials.TransportAuthenticator); ok {
+				break
+			}
+		}
+		if !ok {
+			return errNoTransportSecurity
+		}
+	} else {
+		for _, cd := range ac.dopts.copts.AuthOptions {
+			if cd.RequireTransportSecurity() {
+				return errCredentialsMisuse
+			}
+		}
+	}
+	// Insert ac into ac.cc.conns. This needs to be done before any getTransport(...) is called.
+	ac.cc.mu.Lock()
+	if ac.cc.conns == nil {
+		ac.cc.mu.Unlock()
+		return ErrClientConnClosing
+	}
+	stale := ac.cc.conns[ac.addr]
+	ac.cc.conns[ac.addr] = ac
+	ac.cc.mu.Unlock()
+	if stale != nil {
+		// There is an addrConn alive on ac.addr already. This could be due to
+		// i) stale's Close is undergoing;
+		// ii) a buggy Balancer notifies duplicated Addresses.
+		stale.tearDown(errConnDrain)
+	}
+	ac.stateCV = sync.NewCond(&ac.mu)
+	// skipWait may overwrite the decision in ac.dopts.block.
+	if ac.dopts.block && !skipWait {
+		if err := ac.resetTransport(false); err != nil {
+			ac.tearDown(err)
+			return err
+		}
+		// Start to monitor the error status of transport.
+		go ac.transportMonitor()
+	} else {
+		// Start a goroutine connecting to the server asynchronously.
+		go func() {
+			if err := ac.resetTransport(false); err != nil {
+				grpclog.Printf("Failed to dial %s: %v; please retry.", ac.addr.Addr, err)
+				ac.tearDown(err)
+				return
+			}
+			ac.transportMonitor()
+		}()
+	}
+	return nil
 }
 
-// Close starts to tear down the ClientConn.
+func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions) (transport.ClientTransport, func(), error) {
+	// TODO(zhaoq): Implement fail-fast logic.
+	addr, put, err := cc.balancer.Get(ctx, opts)
+	if err != nil {
+		return nil, nil, err
+	}
+	cc.mu.RLock()
+	if cc.conns == nil {
+		cc.mu.RUnlock()
+		return nil, nil, ErrClientConnClosing
+	}
+	ac, ok := cc.conns[addr]
+	cc.mu.RUnlock()
+	if !ok {
+		if put != nil {
+			put()
+		}
+		return nil, nil, transport.StreamErrorf(codes.Internal, "grpc: failed to find the transport to send the rpc")
+	}
+	t, err := ac.wait(ctx)
+	if err != nil {
+		if put != nil {
+			put()
+		}
+		return nil, nil, err
+	}
+	return t, put, nil
+}
+
+// Close tears down the ClientConn and all underlying connections.
 func (cc *ClientConn) Close() error {
-	return cc.dopts.picker.Close()
+	cc.mu.Lock()
+	if cc.conns == nil {
+		cc.mu.Unlock()
+		return ErrClientConnClosing
+	}
+	conns := cc.conns
+	cc.conns = nil
+	cc.mu.Unlock()
+	cc.balancer.Close()
+	for _, ac := range conns {
+		ac.tearDown(ErrClientConnClosing)
+	}
+	return nil
 }
 
-// Conn is a client connection to a single destination.
-type Conn struct {
-	target       string
+// addrConn is a network connection to a given address.
+type addrConn struct {
+	cc           *ClientConn
+	addr         Address
 	dopts        dialOptions
-	resetChan    chan int
 	shutdownChan chan struct{}
 	events       trace.EventLog
 
 	mu      sync.Mutex
 	state   ConnectivityState
 	stateCV *sync.Cond
+	down    func(error) // the handler called when a connection is down.
 	// ready is closed and becomes nil when a new transport is up or failed
 	// due to timeout.
 	ready     chan struct{}
 	transport transport.ClientTransport
 }
 
-// NewConn creates a Conn.
-func NewConn(cc *ClientConn) (*Conn, error) {
-	if cc.target == "" {
-		return nil, ErrUnspecTarget
-	}
-	c := &Conn{
-		target:       cc.target,
-		dopts:        cc.dopts,
-		resetChan:    make(chan int, 1),
-		shutdownChan: make(chan struct{}),
-	}
-	if EnableTracing {
-		c.events = trace.NewEventLog("grpc.ClientConn", c.target)
-	}
-	if !c.dopts.insecure {
-		var ok bool
-		for _, cd := range c.dopts.copts.AuthOptions {
-			if _, ok = cd.(credentials.TransportAuthenticator); ok {
-				break
-			}
-		}
-		if !ok {
-			return nil, ErrNoTransportSecurity
-		}
-	} else {
-		for _, cd := range c.dopts.copts.AuthOptions {
-			if cd.RequireTransportSecurity() {
-				return nil, ErrCredentialsMisuse
-			}
-		}
-	}
-	c.stateCV = sync.NewCond(&c.mu)
-	if c.dopts.block {
-		if err := c.resetTransport(false); err != nil {
-			c.Close()
-			return nil, err
-		}
-		// Start to monitor the error status of transport.
-		go c.transportMonitor()
-	} else {
-		// Start a goroutine connecting to the server asynchronously.
-		go func() {
-			if err := c.resetTransport(false); err != nil {
-				grpclog.Printf("Failed to dial %s: %v; please retry.", c.target, err)
-				c.Close()
-				return
-			}
-			c.transportMonitor()
-		}()
-	}
-	return c, nil
-}
-
-// printf records an event in cc's event log, unless cc has been closed.
-// REQUIRES cc.mu is held.
-func (cc *Conn) printf(format string, a ...interface{}) {
-	if cc.events != nil {
-		cc.events.Printf(format, a...)
+// printf records an event in ac's event log, unless ac has been closed.
+// REQUIRES ac.mu is held.
+func (ac *addrConn) printf(format string, a ...interface{}) {
+	if ac.events != nil {
+		ac.events.Printf(format, a...)
 	}
 }
 
-// errorf records an error in cc's event log, unless cc has been closed.
-// REQUIRES cc.mu is held.
-func (cc *Conn) errorf(format string, a ...interface{}) {
-	if cc.events != nil {
-		cc.events.Errorf(format, a...)
+// errorf records an error in ac's event log, unless ac has been closed.
+// REQUIRES ac.mu is held.
+func (ac *addrConn) errorf(format string, a ...interface{}) {
+	if ac.events != nil {
+		ac.events.Errorf(format, a...)
 	}
 }
 
-// State returns the connectivity state of the Conn
-func (cc *Conn) State() ConnectivityState {
-	cc.mu.Lock()
-	defer cc.mu.Unlock()
-	return cc.state
+// getState returns the connectivity state of the Conn
+func (ac *addrConn) getState() ConnectivityState {
+	ac.mu.Lock()
+	defer ac.mu.Unlock()
+	return ac.state
 }
 
-// WaitForStateChange blocks until the state changes to something other than the sourceState.
-func (cc *Conn) WaitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error) {
-	cc.mu.Lock()
-	defer cc.mu.Unlock()
-	if sourceState != cc.state {
-		return cc.state, nil
+// waitForStateChange blocks until the state changes to something other than the sourceState.
+func (ac *addrConn) waitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error) {
+	ac.mu.Lock()
+	defer ac.mu.Unlock()
+	if sourceState != ac.state {
+		return ac.state, nil
 	}
 	done := make(chan struct{})
 	var err error
 	go func() {
 		select {
 		case <-ctx.Done():
-			cc.mu.Lock()
+			ac.mu.Lock()
 			err = ctx.Err()
-			cc.stateCV.Broadcast()
-			cc.mu.Unlock()
+			ac.stateCV.Broadcast()
+			ac.mu.Unlock()
 		case <-done:
 		}
 	}()
 	defer close(done)
-	for sourceState == cc.state {
-		cc.stateCV.Wait()
+	for sourceState == ac.state {
+		ac.stateCV.Wait()
 		if err != nil {
-			return cc.state, err
+			return ac.state, err
 		}
 	}
-	return cc.state, nil
+	return ac.state, nil
 }
 
-// NotifyReset tries to signal the underlying transport needs to be reset due to
-// for example a name resolution change in flight.
-func (cc *Conn) NotifyReset() {
-	select {
-	case cc.resetChan <- 0:
-	default:
-	}
-}
-
-func (cc *Conn) resetTransport(closeTransport bool) error {
+func (ac *addrConn) resetTransport(closeTransport bool) error {
 	var retries int
-	start := time.Now()
 	for {
-		cc.mu.Lock()
-		cc.printf("connecting")
-		if cc.state == Shutdown {
-			// cc.Close() has been invoked.
-			cc.mu.Unlock()
-			return ErrClientConnClosing
+		ac.mu.Lock()
+		ac.printf("connecting")
+		if ac.state == Shutdown {
+			// ac.tearDown(...) has been invoked.
+			ac.mu.Unlock()
+			return errConnClosing
 		}
-		cc.state = Connecting
-		cc.stateCV.Broadcast()
-		cc.mu.Unlock()
-		if closeTransport {
-			cc.transport.Close()
+		if ac.down != nil {
+			ac.down(downErrorf(false, true, "%v", errNetworkIO))
+			ac.down = nil
 		}
-		// Adjust timeout for the current try.
-		copts := cc.dopts.copts
-		if copts.Timeout < 0 {
-			cc.Close()
-			return ErrClientConnTimeout
+		ac.state = Connecting
+		ac.stateCV.Broadcast()
+		t := ac.transport
+		ac.mu.Unlock()
+		if closeTransport && t != nil {
+			t.Close()
 		}
-		if copts.Timeout > 0 {
-			copts.Timeout -= time.Since(start)
-			if copts.Timeout <= 0 {
-				cc.Close()
-				return ErrClientConnTimeout
-			}
-		}
-		sleepTime := backoff(retries)
-		timeout := sleepTime
-		if timeout < minConnectTimeout {
-			timeout = minConnectTimeout
-		}
-		if copts.Timeout == 0 || copts.Timeout > timeout {
-			copts.Timeout = timeout
+		sleepTime := ac.dopts.bs.backoff(retries)
+		ac.dopts.copts.Timeout = sleepTime
+		if sleepTime < minConnectTimeout {
+			ac.dopts.copts.Timeout = minConnectTimeout
 		}
 		connectTime := time.Now()
-		addr, err := cc.dopts.picker.PickAddr()
-		var newTransport transport.ClientTransport
-		if err == nil {
-			newTransport, err = transport.NewClientTransport(addr, &copts)
-		}
+		newTransport, err := transport.NewClientTransport(ac.addr.Addr, &ac.dopts.copts)
 		if err != nil {
-			cc.mu.Lock()
-			if cc.state == Shutdown {
-				// cc.Close() has been invoked.
-				cc.mu.Unlock()
-				return ErrClientConnClosing
+			ac.mu.Lock()
+			if ac.state == Shutdown {
+				// ac.tearDown(...) has been invoked.
+				ac.mu.Unlock()
+				return errConnClosing
 			}
-			cc.errorf("transient failure: %v", err)
-			cc.state = TransientFailure
-			cc.stateCV.Broadcast()
-			if cc.ready != nil {
-				close(cc.ready)
-				cc.ready = nil
+			ac.errorf("transient failure: %v", err)
+			ac.state = TransientFailure
+			ac.stateCV.Broadcast()
+			if ac.ready != nil {
+				close(ac.ready)
+				ac.ready = nil
 			}
-			cc.mu.Unlock()
+			ac.mu.Unlock()
 			sleepTime -= time.Since(connectTime)
 			if sleepTime < 0 {
 				sleepTime = 0
 			}
-			// Fail early before falling into sleep.
-			if cc.dopts.copts.Timeout > 0 && cc.dopts.copts.Timeout < sleepTime+time.Since(start) {
-				cc.mu.Lock()
-				cc.errorf("connection timeout")
-				cc.mu.Unlock()
-				cc.Close()
-				return ErrClientConnTimeout
-			}
 			closeTransport = false
-			time.Sleep(sleepTime)
+			select {
+			case <-time.After(sleepTime):
+			case <-ac.shutdownChan:
+			}
 			retries++
-			grpclog.Printf("grpc: Conn.resetTransport failed to create client transport: %v; Reconnecting to %q", err, cc.target)
+			grpclog.Printf("grpc: addrConn.resetTransport failed to create client transport: %v; Reconnecting to %q", err, ac.addr)
 			continue
 		}
-		cc.mu.Lock()
-		cc.printf("ready")
-		if cc.state == Shutdown {
-			// cc.Close() has been invoked.
-			cc.mu.Unlock()
+		ac.mu.Lock()
+		ac.printf("ready")
+		if ac.state == Shutdown {
+			// ac.tearDown(...) has been invoked.
+			ac.mu.Unlock()
 			newTransport.Close()
-			return ErrClientConnClosing
+			return errConnClosing
 		}
-		cc.state = Ready
-		cc.stateCV.Broadcast()
-		cc.transport = newTransport
-		if cc.ready != nil {
-			close(cc.ready)
-			cc.ready = nil
+		ac.state = Ready
+		ac.stateCV.Broadcast()
+		ac.transport = newTransport
+		if ac.ready != nil {
+			close(ac.ready)
+			ac.ready = nil
 		}
-		cc.mu.Unlock()
+		ac.down = ac.cc.balancer.Up(ac.addr)
+		ac.mu.Unlock()
 		return nil
 	}
 }
 
-func (cc *Conn) reconnect() bool {
-	cc.mu.Lock()
-	if cc.state == Shutdown {
-		// cc.Close() has been invoked.
-		cc.mu.Unlock()
-		return false
-	}
-	cc.state = TransientFailure
-	cc.stateCV.Broadcast()
-	cc.mu.Unlock()
-	if err := cc.resetTransport(true); err != nil {
-		// The ClientConn is closing.
-		cc.mu.Lock()
-		cc.printf("transport exiting: %v", err)
-		cc.mu.Unlock()
-		grpclog.Printf("grpc: Conn.transportMonitor exits due to: %v", err)
-		return false
-	}
-	return true
-}
-
 // Run in a goroutine to track the error in transport and create the
 // new transport if an error happens. It returns when the channel is closing.
-func (cc *Conn) transportMonitor() {
+func (ac *addrConn) transportMonitor() {
 	for {
+		ac.mu.Lock()
+		t := ac.transport
+		ac.mu.Unlock()
 		select {
 		// shutdownChan is needed to detect the teardown when
-		// the ClientConn is idle (i.e., no RPC in flight).
-		case <-cc.shutdownChan:
+		// the addrConn is idle (i.e., no RPC in flight).
+		case <-ac.shutdownChan:
 			return
-		case <-cc.resetChan:
-			if !cc.reconnect() {
+		case <-t.Error():
+			ac.mu.Lock()
+			if ac.state == Shutdown {
+				// ac.tearDown(...) has been invoked.
+				ac.mu.Unlock()
 				return
 			}
-		case <-cc.transport.Error():
-			if !cc.reconnect() {
+			ac.state = TransientFailure
+			ac.stateCV.Broadcast()
+			ac.mu.Unlock()
+			if err := ac.resetTransport(true); err != nil {
+				ac.mu.Lock()
+				ac.printf("transport exiting: %v", err)
+				ac.mu.Unlock()
+				grpclog.Printf("grpc: addrConn.transportMonitor exits due to: %v", err)
 				return
 			}
-			// Tries to drain reset signal if there is any since it is out-dated.
-			select {
-			case <-cc.resetChan:
-			default:
-			}
 		}
 	}
 }
 
-// Wait blocks until i) the new transport is up or ii) ctx is done or iii) cc is closed.
-func (cc *Conn) Wait(ctx context.Context) (transport.ClientTransport, error) {
+// wait blocks until i) the new transport is up or ii) ctx is done or iii) ac is closed.
+func (ac *addrConn) wait(ctx context.Context) (transport.ClientTransport, error) {
 	for {
-		cc.mu.Lock()
+		ac.mu.Lock()
 		switch {
-		case cc.state == Shutdown:
-			cc.mu.Unlock()
-			return nil, ErrClientConnClosing
-		case cc.state == Ready:
-			ct := cc.transport
-			cc.mu.Unlock()
+		case ac.state == Shutdown:
+			ac.mu.Unlock()
+			return nil, errConnClosing
+		case ac.state == Ready:
+			ct := ac.transport
+			ac.mu.Unlock()
 			return ct, nil
 		default:
-			ready := cc.ready
+			ready := ac.ready
 			if ready == nil {
 				ready = make(chan struct{})
-				cc.ready = ready
+				ac.ready = ready
 			}
-			cc.mu.Unlock()
+			ac.mu.Unlock()
 			select {
 			case <-ctx.Done():
 				return nil, transport.ContextErr(ctx.Err())
@@ -558,32 +678,46 @@
 	}
 }
 
-// Close starts to tear down the Conn. Returns ErrClientConnClosing if
-// it has been closed (mostly due to dial time-out).
+// tearDown starts to tear down the addrConn.
 // TODO(zhaoq): Make this synchronous to avoid unbounded memory consumption in
-// some edge cases (e.g., the caller opens and closes many ClientConn's in a
+// some edge cases (e.g., the caller opens and closes many addrConn's in a
 // tight loop).
-func (cc *Conn) Close() error {
-	cc.mu.Lock()
-	defer cc.mu.Unlock()
-	if cc.state == Shutdown {
-		return ErrClientConnClosing
+func (ac *addrConn) tearDown(err error) {
+	ac.mu.Lock()
+	defer func() {
+		ac.mu.Unlock()
+		ac.cc.mu.Lock()
+		if ac.cc.conns != nil {
+			delete(ac.cc.conns, ac.addr)
+		}
+		ac.cc.mu.Unlock()
+	}()
+	if ac.state == Shutdown {
+		return
 	}
-	cc.state = Shutdown
-	cc.stateCV.Broadcast()
-	if cc.events != nil {
-		cc.events.Finish()
-		cc.events = nil
+	ac.state = Shutdown
+	if ac.down != nil {
+		ac.down(downErrorf(false, false, "%v", err))
+		ac.down = nil
 	}
-	if cc.ready != nil {
-		close(cc.ready)
-		cc.ready = nil
+	ac.stateCV.Broadcast()
+	if ac.events != nil {
+		ac.events.Finish()
+		ac.events = nil
 	}
-	if cc.transport != nil {
-		cc.transport.Close()
+	if ac.ready != nil {
+		close(ac.ready)
+		ac.ready = nil
 	}
-	if cc.shutdownChan != nil {
-		close(cc.shutdownChan)
+	if ac.transport != nil {
+		if err == errConnDrain {
+			ac.transport.GracefulClose()
+		} else {
+			ac.transport.Close()
+		}
 	}
-	return nil
+	if ac.shutdownChan != nil {
+		close(ac.shutdownChan)
+	}
+	return
 }
diff --git a/go/src/google.golang.org/grpc/clientconn_test.go b/go/src/google.golang.org/grpc/clientconn_test.go
index 8eb1a22..d60a3ae 100644
--- a/go/src/google.golang.org/grpc/clientconn_test.go
+++ b/go/src/google.golang.org/grpc/clientconn_test.go
@@ -72,11 +72,50 @@
 		t.Fatalf("Failed to create credentials %v", err)
 	}
 	// Two conflicting credential configurations
-	if _, err := Dial("Non-Existent.Server:80", WithTransportCredentials(creds), WithTimeout(time.Millisecond), WithBlock(), WithInsecure()); err != ErrCredentialsMisuse {
-		t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, ErrCredentialsMisuse)
+	if _, err := Dial("Non-Existent.Server:80", WithTransportCredentials(creds), WithTimeout(time.Millisecond), WithBlock(), WithInsecure()); err != errCredentialsMisuse {
+		t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, errCredentialsMisuse)
 	}
 	// security info on insecure connection
-	if _, err := Dial("Non-Existent.Server:80", WithPerRPCCredentials(creds), WithTimeout(time.Millisecond), WithBlock(), WithInsecure()); err != ErrCredentialsMisuse {
-		t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, ErrCredentialsMisuse)
+	if _, err := Dial("Non-Existent.Server:80", WithPerRPCCredentials(creds), WithTimeout(time.Millisecond), WithBlock(), WithInsecure()); err != errCredentialsMisuse {
+		t.Fatalf("Dial(_, _) = _, %v, want _, %v", err, errCredentialsMisuse)
+	}
+}
+
+func TestWithBackoffConfigDefault(t *testing.T) {
+	testBackoffConfigSet(t, &DefaultBackoffConfig)
+}
+
+func TestWithBackoffConfig(t *testing.T) {
+	b := BackoffConfig{MaxDelay: DefaultBackoffConfig.MaxDelay / 2}
+	expected := b
+	setDefaults(&expected) // defaults should be set
+	testBackoffConfigSet(t, &expected, WithBackoffConfig(b))
+}
+
+func TestWithBackoffMaxDelay(t *testing.T) {
+	md := DefaultBackoffConfig.MaxDelay / 2
+	expected := BackoffConfig{MaxDelay: md}
+	setDefaults(&expected)
+	testBackoffConfigSet(t, &expected, WithBackoffMaxDelay(md))
+}
+
+func testBackoffConfigSet(t *testing.T, expected *BackoffConfig, opts ...DialOption) {
+	opts = append(opts, WithInsecure())
+	conn, err := Dial("foo:80", opts...)
+	if err != nil {
+		t.Fatalf("unexpected error dialing connection: %v", err)
+	}
+
+	if conn.dopts.bs == nil {
+		t.Fatalf("backoff config not set")
+	}
+
+	actual, ok := conn.dopts.bs.(BackoffConfig)
+	if !ok {
+		t.Fatalf("unexpected type of backoff config: %#v", conn.dopts.bs)
+	}
+
+	if actual != *expected {
+		t.Fatalf("unexpected backoff config on connection: %v, want %v", actual, expected)
 	}
 }
diff --git a/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.pb.go b/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.pb.go
index 97df856..eae485c 100644
--- a/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.pb.go
+++ b/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.pb.go
@@ -1,12 +1,12 @@
 // Code generated by protoc-gen-go.
-// source: helloworld.proto
+// source: examples/helloworld/helloworld/helloworld.proto
 // DO NOT EDIT!
 
 /*
 Package helloworld is a generated protocol buffer package.
 
 It is generated from these files:
-	helloworld.proto
+	examples/helloworld/helloworld/helloworld.proto
 
 It has these top-level messages:
 	HelloRequest
@@ -61,6 +61,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for Greeter service
 
 type GreeterClient interface {
@@ -96,16 +100,22 @@
 	s.RegisterService(&_Greeter_serviceDesc, srv)
 }
 
-func _Greeter_SayHello_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Greeter_SayHello_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(HelloRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(GreeterServer).SayHello(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(GreeterServer).SayHello(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/helloworld.Greeter/SayHello",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(GreeterServer).SayHello(ctx, req.(*HelloRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 var _Greeter_serviceDesc = grpc.ServiceDesc{
@@ -121,17 +131,16 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 181 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x12, 0xc8, 0x48, 0xcd, 0xc9,
-	0xc9, 0x2f, 0xcf, 0x2f, 0xca, 0x49, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x42, 0x88,
-	0x28, 0x29, 0x71, 0xf1, 0x78, 0x80, 0x78, 0x41, 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25, 0x42, 0x42,
-	0x5c, 0x2c, 0x79, 0x89, 0xb9, 0xa9, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x60, 0xb6, 0x92,
-	0x1a, 0x17, 0x17, 0x54, 0x4d, 0x41, 0x4e, 0xa5, 0x90, 0x04, 0x17, 0x7b, 0x6e, 0x6a, 0x71, 0x71,
-	0x62, 0x3a, 0x4c, 0x11, 0x8c, 0x6b, 0xe4, 0xc9, 0xc5, 0xee, 0x5e, 0x94, 0x9a, 0x5a, 0x92, 0x5a,
-	0x24, 0x64, 0xc7, 0xc5, 0x11, 0x9c, 0x58, 0x09, 0xd6, 0x25, 0x24, 0xa1, 0x87, 0xe4, 0x02, 0x64,
-	0xcb, 0xa4, 0xc4, 0xb0, 0xc8, 0x00, 0xad, 0x50, 0x62, 0x70, 0x32, 0xe3, 0x92, 0xce, 0xcc, 0xd7,
-	0x4b, 0x2f, 0x2a, 0x48, 0xd6, 0x4b, 0xad, 0x48, 0xcc, 0x2d, 0xc8, 0x49, 0x2d, 0x46, 0x52, 0xeb,
-	0xc4, 0x0f, 0x56, 0x1c, 0x0e, 0x62, 0x07, 0x80, 0xbc, 0x14, 0xc0, 0xb8, 0x88, 0x89, 0xd9, 0xc3,
-	0x27, 0x3c, 0x89, 0x0d, 0xec, 0x43, 0x63, 0x40, 0x00, 0x00, 0x00, 0xff, 0xff, 0xdf, 0x0a, 0xdc,
-	0xe8, 0xf5, 0x00, 0x00, 0x00,
+	// 175 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xd2, 0x4f, 0xad, 0x48, 0xcc,
+	0x2d, 0xc8, 0x49, 0x2d, 0xd6, 0xcf, 0x48, 0xcd, 0xc9, 0xc9, 0x2f, 0xcf, 0x2f, 0xca, 0x49, 0xc1,
+	0xce, 0xd4, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x42, 0x88, 0x28, 0xc9, 0x70, 0xf1, 0x78,
+	0x80, 0x78, 0x41, 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25, 0x42, 0x3c, 0x5c, 0x2c, 0x79, 0x89, 0xb9,
+	0xa9, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x4a, 0xb2, 0x5c, 0x5c, 0x50, 0xd9, 0x82, 0x9c, 0x4a,
+	0x21, 0x7e, 0x2e, 0xf6, 0xdc, 0xd4, 0xe2, 0xe2, 0xc4, 0x74, 0xa8, 0xb4, 0x91, 0x27, 0x17, 0xbb,
+	0x7b, 0x51, 0x6a, 0x6a, 0x49, 0x6a, 0x91, 0x90, 0x1d, 0x17, 0x47, 0x70, 0x62, 0x25, 0x58, 0xb1,
+	0x90, 0x84, 0x1e, 0x92, 0x95, 0xc8, 0xa6, 0x4b, 0x89, 0x61, 0x91, 0x01, 0x9a, 0xac, 0xc4, 0xe0,
+	0x64, 0xc0, 0x25, 0x9d, 0x99, 0xaf, 0x97, 0x5e, 0x54, 0x90, 0xac, 0x07, 0xf3, 0x0e, 0x92, 0x5a,
+	0x27, 0x7e, 0xb0, 0xe2, 0x70, 0x10, 0x3b, 0x00, 0xe4, 0x87, 0x00, 0xc6, 0x24, 0x36, 0xb0, 0x67,
+	0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x20, 0x7e, 0x28, 0x45, 0xff, 0x00, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.proto b/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.proto
index 0bee1fc..c3ddd4a 100644
--- a/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.proto
+++ b/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.proto
@@ -32,7 +32,6 @@
 option java_multiple_files = true;
 option java_package = "io.grpc.examples.helloworld";
 option java_outer_classname = "HelloWorldProto";
-option objc_class_prefix = "HLW";
 
 package helloworld;
 
diff --git a/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.pb.go b/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.pb.go
index 4f5f5c1..cc8cb5e 100644
--- a/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.pb.go
+++ b/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.pb.go
@@ -1,12 +1,12 @@
 // Code generated by protoc-gen-go.
-// source: route_guide.proto
+// source: examples/route_guide/routeguide/route_guide.proto
 // DO NOT EDIT!
 
 /*
 Package routeguide is a generated protocol buffer package.
 
 It is generated from these files:
-	route_guide.proto
+	examples/route_guide/routeguide/route_guide.proto
 
 It has these top-level messages:
 	Point
@@ -152,6 +152,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for RouteGuide service
 
 type RouteGuideClient interface {
@@ -328,16 +332,22 @@
 	s.RegisterService(&_RouteGuide_serviceDesc, srv)
 }
 
-func _RouteGuide_GetFeature_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _RouteGuide_GetFeature_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(Point)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(RouteGuideServer).GetFeature(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(RouteGuideServer).GetFeature(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/routeguide.RouteGuide/GetFeature",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(RouteGuideServer).GetFeature(ctx, req.(*Point))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 func _RouteGuide_ListFeatures_Handler(srv interface{}, stream grpc.ServerStream) error {
@@ -443,31 +453,29 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 412 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x84, 0x53, 0xd1, 0x6a, 0xa3, 0x40,
-	0x14, 0xcd, 0x98, 0x64, 0xb3, 0x5e, 0x5d, 0x96, 0xcc, 0xb2, 0x20, 0xd9, 0x85, 0xb6, 0xf6, 0x25,
-	0x2f, 0x95, 0x90, 0x42, 0x1e, 0x5b, 0x9a, 0x40, 0xf3, 0x12, 0x4a, 0x6a, 0xf3, 0x1e, 0xa6, 0x3a,
-	0x35, 0x03, 0xea, 0x88, 0x8e, 0xd0, 0x7e, 0x40, 0xbf, 0xa0, 0x7f, 0xd0, 0x2f, 0xed, 0x38, 0x6a,
-	0x62, 0xda, 0x84, 0xbe, 0x39, 0xe7, 0x9e, 0x73, 0xef, 0xb9, 0xe7, 0x22, 0xf4, 0x53, 0x9e, 0x0b,
-	0xba, 0x0e, 0x72, 0xe6, 0x53, 0x27, 0x49, 0xb9, 0xe0, 0x18, 0x14, 0xa4, 0x10, 0xfb, 0x06, 0xba,
-	0x4b, 0xce, 0x62, 0x81, 0x07, 0xf0, 0x33, 0x24, 0x82, 0x89, 0xdc, 0xa7, 0x16, 0x3a, 0x45, 0xc3,
-	0xae, 0xbb, 0x7d, 0xe3, 0xff, 0xa0, 0x87, 0x3c, 0x0e, 0xca, 0xa2, 0xa6, 0x8a, 0x3b, 0xc0, 0xbe,
-	0x07, 0xdd, 0xa5, 0x9e, 0x20, 0x71, 0x10, 0x52, 0x7c, 0x06, 0x5a, 0xc8, 0x55, 0x03, 0x63, 0xdc,
-	0x77, 0x76, 0x83, 0x1c, 0x35, 0xc5, 0x95, 0xc5, 0x82, 0xb2, 0x61, 0xaa, 0xcd, 0x61, 0xca, 0x86,
-	0xd9, 0x0b, 0xe8, 0xdd, 0x52, 0x22, 0xf2, 0x94, 0x62, 0x0c, 0x9d, 0x98, 0x44, 0xa5, 0x27, 0xdd,
-	0x55, 0xdf, 0xf8, 0x42, 0x7a, 0xe5, 0x9e, 0x74, 0xc7, 0xe3, 0xe3, 0x7d, 0xb6, 0x14, 0x7b, 0x25,
-	0x0d, 0x16, 0xd5, 0x3b, 0x2e, 0xf6, 0xb5, 0xe8, 0x5b, 0x2d, 0xb6, 0xa0, 0x17, 0xd1, 0x2c, 0x23,
-	0x41, 0xb9, 0xb8, 0xee, 0xd6, 0x4f, 0xfb, 0x0d, 0x81, 0xa9, 0xda, 0x3e, 0xe4, 0x51, 0x44, 0xd2,
-	0x17, 0x7c, 0x02, 0x46, 0x52, 0xa8, 0xd7, 0x1e, 0xcf, 0x63, 0x51, 0x85, 0x08, 0x0a, 0x9a, 0x15,
-	0x08, 0x3e, 0x87, 0x5f, 0x4f, 0xe5, 0x56, 0x15, 0xa5, 0x8c, 0xd2, 0xac, 0xc0, 0x92, 0x24, 0xef,
-	0xe0, 0xb3, 0x4c, 0xa6, 0xe9, 0x51, 0xab, 0x5d, 0xde, 0xa1, 0x7e, 0xcb, 0xe4, 0x4c, 0x1a, 0x92,
-	0x24, 0xa3, 0xfe, 0x5a, 0x30, 0x99, 0x49, 0x47, 0xd5, 0x8d, 0x0a, 0x5b, 0x49, 0x68, 0xfc, 0xaa,
-	0x01, 0x28, 0x57, 0xf3, 0x62, 0x1d, 0x3c, 0x01, 0x98, 0x53, 0x51, 0x67, 0xf9, 0x75, 0xd3, 0xc1,
-	0x9f, 0x26, 0x54, 0xf1, 0xec, 0x16, 0xbe, 0x02, 0x73, 0x21, 0xa7, 0x56, 0x40, 0x86, 0xff, 0x36,
-	0x69, 0xdb, 0x6b, 0x1f, 0x51, 0x8f, 0x90, 0xd4, 0x1b, 0x92, 0xc5, 0x53, 0x5f, 0x79, 0x39, 0x34,
-	0xd8, 0xda, 0xeb, 0xd8, 0xc8, 0xd1, 0x6e, 0x0d, 0x11, 0xbe, 0xae, 0x4e, 0x36, 0xdb, 0x10, 0xf1,
-	0x69, 0x78, 0x7d, 0xc9, 0xc1, 0x61, 0xb8, 0x90, 0x8f, 0xd0, 0x74, 0x02, 0xff, 0x18, 0x77, 0x82,
-	0x34, 0xf1, 0x1c, 0xfa, 0x4c, 0xa2, 0x24, 0xa4, 0x59, 0x83, 0x3e, 0xfd, 0xbd, 0xcb, 0x68, 0x59,
-	0xfc, 0x13, 0x4b, 0xf4, 0xae, 0xb5, 0xdd, 0xd5, 0xfc, 0xf1, 0x87, 0xfa, 0x45, 0x2e, 0x3f, 0x02,
-	0x00, 0x00, 0xff, 0xff, 0xf3, 0xe2, 0x76, 0x5e, 0x37, 0x03, 0x00, 0x00,
+	// 374 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x74, 0x52, 0x5d, 0x4b, 0xeb, 0x40,
+	0x10, 0x6d, 0x7a, 0xdb, 0xdb, 0x9b, 0x69, 0x2e, 0xb5, 0x5b, 0x0b, 0xa5, 0x22, 0x48, 0x7c, 0xe9,
+	0x83, 0xc4, 0x5a, 0xc1, 0x27, 0x51, 0xac, 0x60, 0x11, 0x44, 0x8a, 0xfe, 0x80, 0xb2, 0x4d, 0xc6,
+	0x74, 0x21, 0xc9, 0x86, 0x64, 0x03, 0xfa, 0x03, 0xfc, 0xdf, 0xee, 0x47, 0x6a, 0x53, 0x6d, 0xdf,
+	0x96, 0x33, 0xe7, 0xcc, 0x9c, 0x39, 0x3b, 0x70, 0x81, 0xef, 0x34, 0x4e, 0x23, 0xcc, 0xcf, 0x33,
+	0x5e, 0x08, 0x5c, 0x84, 0x05, 0x0b, 0xd0, 0xbc, 0x2b, 0x4f, 0x03, 0x7b, 0x69, 0xc6, 0x05, 0x27,
+	0xb0, 0xa9, 0xba, 0x67, 0xd0, 0x9c, 0x73, 0x96, 0x08, 0x72, 0x00, 0xff, 0x22, 0x2a, 0x98, 0x28,
+	0x02, 0x1c, 0x58, 0x27, 0xd6, 0xa8, 0x49, 0xba, 0x60, 0x47, 0x3c, 0x09, 0x0d, 0x54, 0x57, 0x90,
+	0xfb, 0x08, 0xf6, 0x0b, 0xfa, 0x82, 0x26, 0x61, 0x84, 0xe4, 0x18, 0xea, 0x11, 0xd7, 0xdc, 0xf6,
+	0xa4, 0xeb, 0x6d, 0x7a, 0x7a, 0xa6, 0xa1, 0x2c, 0xaf, 0x98, 0xd6, 0xed, 0x2a, 0xbb, 0xd7, 0xd0,
+	0x7a, 0x40, 0x2a, 0x8a, 0x0c, 0x89, 0x03, 0x8d, 0x84, 0xc6, 0x66, 0xac, 0x4d, 0x4e, 0xa5, 0x11,
+	0xee, 0x4b, 0x2b, 0x3c, 0xd9, 0xaf, 0xbe, 0x93, 0x46, 0x14, 0xf6, 0xcc, 0x05, 0x6e, 0x29, 0xf6,
+	0xda, 0xe9, 0x40, 0x2b, 0xc6, 0x3c, 0xa7, 0xa1, 0xd9, 0xc5, 0x76, 0x97, 0xe0, 0xe8, 0x16, 0xaf,
+	0x45, 0x1c, 0xd3, 0xec, 0x83, 0xf4, 0xa0, 0x9d, 0x2a, 0xe6, 0xc2, 0xe7, 0x45, 0x22, 0xca, 0x0c,
+	0xfa, 0xf0, 0xff, 0xcd, 0xb8, 0x2c, 0x61, 0x9d, 0x83, 0x0a, 0x2b, 0x60, 0xb9, 0xcc, 0xc1, 0xc7,
+	0xc1, 0x1f, 0x8d, 0x1c, 0x82, 0x83, 0x11, 0x4d, 0x73, 0x0c, 0x16, 0x82, 0xc9, 0x5d, 0x1a, 0x0a,
+	0x9d, 0x7c, 0xd6, 0x01, 0xf4, 0x90, 0x99, 0x72, 0x42, 0xae, 0x00, 0x66, 0x28, 0xd6, 0x6b, 0xff,
+	0x36, 0x39, 0xec, 0x55, 0xa1, 0x92, 0xe7, 0xd6, 0xc8, 0x0d, 0x38, 0x4f, 0x72, 0x5c, 0x09, 0xe4,
+	0xa4, 0x5f, 0xa5, 0x7d, 0x7f, 0xc8, 0x1e, 0xf5, 0xd8, 0x92, 0xfa, 0xb6, 0x64, 0xf1, 0x2c, 0xd0,
+	0x5e, 0x76, 0x0d, 0x1e, 0x6c, 0x75, 0xac, 0xc4, 0xe2, 0xd6, 0x46, 0x16, 0xb9, 0x2d, 0xd3, 0xbe,
+	0x5f, 0x51, 0xf1, 0x63, 0xf8, 0xfa, 0x13, 0x86, 0xbb, 0x61, 0x25, 0x1f, 0x5b, 0xd3, 0x31, 0x1c,
+	0x31, 0xee, 0x85, 0x59, 0xea, 0x7b, 0xeb, 0x73, 0xad, 0xd0, 0xa7, 0x9d, 0x4d, 0x46, 0x73, 0x75,
+	0xa1, 0x73, 0x6b, 0xf9, 0x57, 0x9f, 0xea, 0xe5, 0x57, 0x00, 0x00, 0x00, 0xff, 0xff, 0x80, 0x13,
+	0xe7, 0xb8, 0xdf, 0x02, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.proto b/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.proto
index 12c4495..5a782aa 100644
--- a/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.proto
+++ b/go/src/google.golang.org/grpc/examples/route_guide/routeguide/route_guide.proto
@@ -32,7 +32,6 @@
 option java_multiple_files = true;
 option java_package = "io.grpc.examples.routeguide";
 option java_outer_classname = "RouteGuideProto";
-option objc_class_prefix = "RTG";
 
 package routeguide;
 
diff --git a/go/src/google.golang.org/grpc/examples/route_guide/server/server.go b/go/src/google.golang.org/grpc/examples/route_guide/server/server.go
index 09b3942..c8be497 100644
--- a/go/src/google.golang.org/grpc/examples/route_guide/server/server.go
+++ b/go/src/google.golang.org/grpc/examples/route_guide/server/server.go
@@ -82,7 +82,7 @@
 	return &pb.Feature{"", point}, nil
 }
 
-// ListFeatures lists all features comtained within the given bounding Rectangle.
+// ListFeatures lists all features contained within the given bounding Rectangle.
 func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
 	for _, feature := range s.savedFeatures {
 		if inRange(feature.Location, rect) {
diff --git a/go/src/google.golang.org/grpc/health/grpc_health_v1/health.pb.go b/go/src/google.golang.org/grpc/health/grpc_health_v1/health.pb.go
index bfe238e..d9550c7 100644
--- a/go/src/google.golang.org/grpc/health/grpc_health_v1/health.pb.go
+++ b/go/src/google.golang.org/grpc/health/grpc_health_v1/health.pb.go
@@ -1,12 +1,12 @@
 // Code generated by protoc-gen-go.
-// source: health.proto
+// source: health/grpc_health_v1/health.proto
 // DO NOT EDIT!
 
 /*
 Package grpc_health_v1 is a generated protocol buffer package.
 
 It is generated from these files:
-	health.proto
+	health/grpc_health_v1/health.proto
 
 It has these top-level messages:
 	HealthCheckRequest
@@ -86,6 +86,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for Health service
 
 type HealthClient interface {
@@ -119,16 +123,22 @@
 	s.RegisterService(&_Health_serviceDesc, srv)
 }
 
-func _Health_Check_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _Health_Check_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(HealthCheckRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(HealthServer).Check(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(HealthServer).Check(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.health.v1.Health/Check",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(HealthServer).Check(ctx, req.(*HealthCheckRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 var _Health_serviceDesc = grpc.ServiceDesc{
@@ -145,18 +155,18 @@
 
 var fileDescriptor0 = []byte{
 	// 209 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0xc9, 0x48, 0x4d, 0xcc,
-	0x29, 0xc9, 0xd0, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x12, 0x4e, 0x2f, 0x2a, 0x48, 0xd6, 0x83,
-	0x0a, 0x95, 0x19, 0x26, 0xe6, 0x14, 0x64, 0x24, 0x2a, 0xe9, 0x71, 0x09, 0x79, 0x80, 0x45, 0x9c,
-	0x33, 0x52, 0x93, 0xb3, 0x83, 0x52, 0x0b, 0x4b, 0x53, 0x8b, 0x4b, 0x84, 0x24, 0xb8, 0xd8, 0x8b,
-	0x53, 0x8b, 0xca, 0x32, 0x93, 0x53, 0x25, 0x18, 0x15, 0x18, 0x35, 0x38, 0x83, 0x60, 0x5c, 0xa5,
-	0x85, 0x8c, 0x5c, 0xc2, 0x28, 0x1a, 0x8a, 0x0b, 0xf2, 0xf3, 0x8a, 0x53, 0x85, 0xfc, 0xb8, 0xd8,
-	0x8a, 0x4b, 0x12, 0x4b, 0x4a, 0x8b, 0xc1, 0x1a, 0xf8, 0x8c, 0xcc, 0xf4, 0xb0, 0xd8, 0xa6, 0x87,
-	0x45, 0xa7, 0x5e, 0x30, 0xc8, 0xe4, 0xbc, 0xf4, 0x60, 0xb0, 0xee, 0x20, 0xa8, 0x29, 0x4a, 0x56,
-	0x5c, 0xbc, 0x28, 0x12, 0x42, 0xdc, 0x5c, 0xec, 0xa1, 0x7e, 0xde, 0x7e, 0xfe, 0xe1, 0x7e, 0x02,
-	0x0c, 0x20, 0x4e, 0xb0, 0x6b, 0x50, 0x98, 0xa7, 0x9f, 0xbb, 0x00, 0xa3, 0x10, 0x3f, 0x17, 0xb7,
-	0x9f, 0x7f, 0x48, 0x3c, 0x4c, 0x80, 0xc9, 0x28, 0x85, 0x8b, 0x0d, 0x62, 0x91, 0x50, 0x14, 0x17,
-	0x2b, 0xd8, 0x32, 0x21, 0x75, 0xc2, 0xce, 0x01, 0xfb, 0x5c, 0x4a, 0x83, 0x58, 0x77, 0x27, 0xb1,
-	0x81, 0x43, 0xd5, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0xe1, 0x3f, 0xd0, 0xe1, 0x65, 0x01, 0x00,
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x52, 0xca, 0x48, 0x4d, 0xcc,
+	0x29, 0xc9, 0xd0, 0x4f, 0x2f, 0x2a, 0x48, 0x8e, 0x87, 0xb0, 0xe3, 0xcb, 0x0c, 0xf5, 0x21, 0x2c,
+	0xbd, 0x82, 0xa2, 0xfc, 0x92, 0x7c, 0x21, 0x3e, 0x90, 0xa4, 0x1e, 0x54, 0xa8, 0xcc, 0x50, 0x49,
+	0x95, 0x4b, 0xc8, 0x03, 0xcc, 0x71, 0xce, 0x48, 0x4d, 0xce, 0x0e, 0x4a, 0x2d, 0x2c, 0x4d, 0x2d,
+	0x2e, 0x11, 0xe2, 0xe7, 0x62, 0x2f, 0x4e, 0x2d, 0x2a, 0xcb, 0x4c, 0x4e, 0x95, 0x60, 0x54, 0x60,
+	0xd4, 0xe0, 0x54, 0x9a, 0xc2, 0xc8, 0x25, 0x8c, 0xa2, 0xae, 0xb8, 0x20, 0x3f, 0xaf, 0x38, 0x55,
+	0xc8, 0x91, 0x8b, 0xad, 0xb8, 0x24, 0xb1, 0xa4, 0xb4, 0x18, 0xac, 0x8e, 0xcf, 0xc8, 0x50, 0x0f,
+	0xd5, 0x7c, 0x3d, 0x2c, 0x9a, 0xf4, 0x82, 0x41, 0x46, 0xe7, 0xa5, 0x07, 0x83, 0x35, 0x2a, 0x59,
+	0x71, 0xf1, 0xa2, 0x08, 0x08, 0x71, 0x73, 0xb1, 0x87, 0xfa, 0x79, 0xfb, 0xf9, 0x87, 0xfb, 0x09,
+	0x30, 0x80, 0x38, 0xc1, 0xae, 0x41, 0x61, 0x9e, 0x7e, 0xee, 0x02, 0x8c, 0x40, 0x67, 0x71, 0xfb,
+	0xf9, 0x87, 0xc4, 0xc3, 0x04, 0x98, 0x8c, 0xa2, 0xb8, 0xd8, 0x20, 0x16, 0x08, 0x05, 0x70, 0xb1,
+	0x82, 0x2d, 0x11, 0x52, 0xc2, 0xeb, 0x02, 0xb0, 0xf7, 0xa4, 0x94, 0x89, 0x70, 0x65, 0x12, 0x1b,
+	0x38, 0xc0, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa9, 0xf7, 0x2e, 0x1f, 0x56, 0x01, 0x00,
 	0x00,
 }
diff --git a/go/src/google.golang.org/grpc/interceptor.go b/go/src/google.golang.org/grpc/interceptor.go
new file mode 100644
index 0000000..588f59e
--- /dev/null
+++ b/go/src/google.golang.org/grpc/interceptor.go
@@ -0,0 +1,74 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+	"golang.org/x/net/context"
+)
+
+// UnaryServerInfo contains various information about a unary RPC on the
+// server side. All per-RPC information may be mutated by the interceptor.
+type UnaryServerInfo struct {
+	// Server is the service implementation the user provides. This is read-only.
+	Server interface{}
+	// FullMethod is the full RPC method string, i.e., /package.service/method.
+	FullMethod string
+}
+
+// UnaryHandler defines the handler invoked by UnaryServerInterceptor to complete the normal
+// execution of a unary RPC.
+type UnaryHandler func(ctx context.Context, req interface{}) (interface{}, error)
+
+// UnaryServerInterceptor provides a hook to intercept the execution of a unary RPC on the server.
+// info contains all the information of this RPC that the interceptor can operate on, and handler
+// is the wrapper of the service method implementation. It is the responsibility of the interceptor
+// to invoke handler to complete the RPC.
+type UnaryServerInterceptor func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error)
+
+// StreamServerInfo contains various information about a streaming RPC on the
+// server side. All per-RPC information may be mutated by the interceptor.
+type StreamServerInfo struct {
+	// FullMethod is the full RPC method string, i.e., /package.service/method.
+	FullMethod string
+	// IsClientStream indicates whether the RPC is a client streaming RPC.
+	IsClientStream bool
+	// IsServerStream indicates whether the RPC is a server streaming RPC.
+	IsServerStream bool
+}
+
+// StreamServerInterceptor provides a hook to intercept the execution of a streaming RPC on the
+// server. info contains all the information of this RPC that the interceptor can operate on, and
+// handler is the service method implementation. It is the responsibility of the interceptor to
+// invoke handler to complete the RPC.
+type StreamServerInterceptor func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error
diff --git a/go/src/google.golang.org/grpc/interop/grpc_testing/test.pb.go b/go/src/google.golang.org/grpc/interop/grpc_testing/test.pb.go
index 7b0803f..6437ad6 100755
--- a/go/src/google.golang.org/grpc/interop/grpc_testing/test.pb.go
+++ b/go/src/google.golang.org/grpc/interop/grpc_testing/test.pb.go
@@ -1,12 +1,12 @@
 // Code generated by protoc-gen-go.
-// source: test.proto
+// source: interop/grpc_testing/test.proto
 // DO NOT EDIT!
 
 /*
 Package grpc_testing is a generated protocol buffer package.
 
 It is generated from these files:
-	test.proto
+	interop/grpc_testing/test.proto
 
 It has these top-level messages:
 	Empty
@@ -356,6 +356,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for TestService service
 
 type TestServiceClient interface {
@@ -564,28 +568,40 @@
 	s.RegisterService(&_TestService_serviceDesc, srv)
 }
 
-func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(Empty)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(TestServiceServer).EmptyCall(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(TestServiceServer).EmptyCall(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.TestService/EmptyCall",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(TestServiceServer).EmptyCall(ctx, req.(*Empty))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(SimpleRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(TestServiceServer).UnaryCall(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(TestServiceServer).UnaryCall(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.TestService/UnaryCall",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 func _TestService_StreamingOutputCall_Handler(srv interface{}, stream grpc.ServerStream) error {
@@ -727,41 +743,38 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 567 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xbc, 0x54, 0x51, 0x6f, 0xd2, 0x50,
-	0x14, 0xb6, 0x03, 0x64, 0x1c, 0x58, 0x43, 0x0e, 0x59, 0x64, 0x9d, 0x89, 0x4b, 0x7d, 0xb0, 0x9a,
-	0x88, 0x86, 0x44, 0x1f, 0x35, 0x73, 0x63, 0x71, 0x09, 0x03, 0x6c, 0xe1, 0x99, 0x5c, 0xe1, 0x0e,
-	0x9b, 0x94, 0xb6, 0xb6, 0xb7, 0x46, 0x7c, 0xf0, 0x8f, 0xf9, 0x67, 0xfc, 0x11, 0xfe, 0x00, 0xef,
-	0xbd, 0x6d, 0xa1, 0x40, 0x17, 0x99, 0xc6, 0xbd, 0xb5, 0xdf, 0xf9, 0xce, 0x77, 0xbe, 0xef, 0x9e,
-	0xdb, 0x02, 0x30, 0x1a, 0xb2, 0x96, 0x1f, 0x78, 0xcc, 0xc3, 0xda, 0x2c, 0xf0, 0x27, 0x2d, 0x01,
-	0xd8, 0xee, 0x4c, 0x2f, 0x43, 0xa9, 0x33, 0xf7, 0xd9, 0x42, 0xef, 0x42, 0x79, 0x40, 0x16, 0x8e,
-	0x47, 0xa6, 0xf8, 0x1c, 0x8a, 0x6c, 0xe1, 0xd3, 0xa6, 0x72, 0xa2, 0x18, 0x6a, 0xfb, 0xa8, 0x95,
-	0x6d, 0x68, 0x25, 0xa4, 0x21, 0x27, 0x98, 0x92, 0x86, 0x08, 0xc5, 0x8f, 0xde, 0x74, 0xd1, 0xdc,
-	0xe3, 0xf4, 0x9a, 0x29, 0x9f, 0xf5, 0x5f, 0x0a, 0x1c, 0x58, 0xf6, 0xdc, 0x77, 0xa8, 0x49, 0x3f,
-	0x47, 0xbc, 0x15, 0xdf, 0xc0, 0x41, 0x40, 0x43, 0xdf, 0x73, 0x43, 0x3a, 0xde, 0x4d, 0xbd, 0x96,
-	0xf2, 0xc5, 0x1b, 0x3e, 0xce, 0xf4, 0x87, 0xf6, 0x37, 0x2a, 0xc7, 0x95, 0x56, 0x24, 0x8b, 0x63,
-	0xf8, 0x02, 0xca, 0x7e, 0xac, 0xd0, 0x2c, 0xf0, 0x72, 0xb5, 0x7d, 0x98, 0x2b, 0x6f, 0xa6, 0x2c,
-	0xa1, 0x7a, 0x6d, 0x3b, 0xce, 0x38, 0x0a, 0x69, 0xe0, 0x92, 0x39, 0x6d, 0x16, 0x79, 0xdb, 0xbe,
-	0x59, 0x13, 0xe0, 0x28, 0xc1, 0xd0, 0x80, 0xba, 0x24, 0x79, 0x24, 0x62, 0x9f, 0xc6, 0xe1, 0xc4,
-	0xe3, 0xee, 0x4b, 0x92, 0xa7, 0x0a, 0xbc, 0x2f, 0x60, 0x4b, 0xa0, 0xfa, 0x77, 0x50, 0xd3, 0xd4,
-	0xb1, 0xab, 0xac, 0x23, 0x65, 0x27, 0x47, 0x1a, 0xec, 0x2f, 0xcd, 0x88, 0x88, 0x15, 0x73, 0xf9,
-	0x8e, 0x8f, 0xa0, 0x9a, 0xf5, 0x50, 0x90, 0x65, 0xf0, 0x56, 0xf3, 0xbb, 0x70, 0x64, 0xb1, 0x80,
-	0x92, 0x39, 0x97, 0xbe, 0x74, 0xfd, 0x88, 0x9d, 0x11, 0xc7, 0x49, 0x37, 0x70, 0x5b, 0x2b, 0xfa,
-	0x10, 0xb4, 0x3c, 0xb5, 0x24, 0xd9, 0x6b, 0x78, 0x40, 0x66, 0xb3, 0x80, 0xce, 0x08, 0xa3, 0xd3,
-	0x71, 0xd2, 0x13, 0xaf, 0x46, 0x91, 0xab, 0x39, 0x5c, 0x95, 0x13, 0x69, 0xb1, 0x23, 0xfd, 0x12,
-	0x30, 0xd5, 0x18, 0x90, 0x80, 0xc7, 0x62, 0x34, 0x08, 0xc5, 0x25, 0xca, 0xb4, 0xca, 0x67, 0x11,
-	0xd7, 0x76, 0x79, 0xf5, 0x0b, 0x11, 0x0b, 0x4a, 0x16, 0x0e, 0x29, 0x34, 0x0a, 0xf5, 0x9f, 0x4a,
-	0xc6, 0x61, 0x3f, 0x62, 0x1b, 0x81, 0xff, 0xf5, 0xca, 0x7d, 0x80, 0xc6, 0xb2, 0xdf, 0x5f, 0x5a,
-	0xe5, 0x3e, 0x0a, 0xfc, 0xf0, 0x4e, 0xd6, 0x55, 0xb6, 0x23, 0x99, 0x18, 0x6c, 0xc7, 0xbc, 0xed,
-	0x05, 0xd5, 0x7b, 0x70, 0x9c, 0x9b, 0xf0, 0x2f, 0xaf, 0xd7, 0xb3, 0xb7, 0x50, 0xcd, 0x04, 0xc6,
-	0x3a, 0xd4, 0xce, 0xfa, 0x57, 0x03, 0xb3, 0x63, 0x59, 0xa7, 0xef, 0xba, 0x9d, 0xfa, 0x3d, 0xbe,
-	0x08, 0x75, 0xd4, 0x5b, 0xc3, 0x14, 0x04, 0xb8, 0x6f, 0x9e, 0xf6, 0xce, 0xfb, 0x57, 0xf5, 0xbd,
-	0xf6, 0x8f, 0x22, 0x54, 0x87, 0x5c, 0xdd, 0xe2, 0x4b, 0xb0, 0x27, 0x14, 0x5f, 0x41, 0x45, 0xfe,
-	0x40, 0x84, 0x2d, 0x6c, 0xac, 0x4f, 0x97, 0x05, 0x2d, 0x0f, 0xc4, 0x0b, 0xa8, 0x8c, 0x5c, 0x12,
-	0xc4, 0x6d, 0xc7, 0xeb, 0x8c, 0xb5, 0x1f, 0x87, 0xf6, 0x30, 0xbf, 0x98, 0x1c, 0x80, 0x03, 0x8d,
-	0x9c, 0xf3, 0x41, 0x63, 0xa3, 0xe9, 0xc6, 0x4b, 0xa2, 0x3d, 0xdd, 0x81, 0x19, 0xcf, 0x7a, 0xa9,
-	0xa0, 0x0d, 0xb8, 0xfd, 0x45, 0xe0, 0x93, 0x1b, 0x24, 0x36, 0xbf, 0x40, 0xcd, 0xf8, 0x33, 0x31,
-	0x1e, 0x65, 0x88, 0x51, 0xea, 0x45, 0xe4, 0x38, 0xe7, 0x11, 0x4f, 0xfb, 0xf5, 0xbf, 0x65, 0x32,
-	0x14, 0x99, 0x4a, 0x7d, 0x4f, 0x9c, 0xeb, 0x3b, 0x18, 0xf5, 0x3b, 0x00, 0x00, 0xff, 0xff, 0x4c,
-	0x41, 0xfe, 0xb6, 0x89, 0x06, 0x00, 0x00,
+	// 519 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xbc, 0x53, 0x4f, 0x6f, 0xd3, 0x30,
+	0x14, 0x27, 0x5b, 0x4b, 0xd7, 0xd7, 0x2e, 0x8a, 0x5c, 0x4d, 0x64, 0x19, 0xd2, 0xa6, 0x1c, 0x58,
+	0xe0, 0xd0, 0x4d, 0x95, 0x10, 0xa7, 0x09, 0x46, 0xd7, 0x09, 0x24, 0xb6, 0x56, 0xcd, 0x76, 0x8e,
+	0x4c, 0xeb, 0x85, 0x48, 0x6e, 0x62, 0x1c, 0x07, 0x51, 0xbe, 0x15, 0x07, 0x4e, 0x7c, 0x39, 0xec,
+	0x24, 0x2d, 0x49, 0xc9, 0xa0, 0x3d, 0xb0, 0x53, 0x24, 0xbf, 0xdf, 0xbf, 0xf7, 0x27, 0x70, 0x18,
+	0x84, 0x82, 0xf0, 0x88, 0x9d, 0xf8, 0x9c, 0x4d, 0x3c, 0x41, 0x62, 0x11, 0x84, 0xfe, 0x89, 0xfa,
+	0x76, 0x19, 0x8f, 0x44, 0x84, 0xda, 0xaa, 0xd0, 0xcd, 0x0b, 0x76, 0x03, 0xea, 0x83, 0x19, 0x13,
+	0x73, 0xfb, 0x0d, 0x34, 0x46, 0x78, 0x4e, 0x23, 0x3c, 0x45, 0xc7, 0x50, 0x13, 0x73, 0x46, 0x4c,
+	0xed, 0x48, 0x73, 0xf4, 0xde, 0x7e, 0xb7, 0x48, 0xe8, 0xe6, 0xa0, 0x1b, 0x09, 0x40, 0x6d, 0xa8,
+	0x7d, 0x8c, 0xa6, 0x73, 0x73, 0x4b, 0x02, 0xdb, 0xf6, 0x77, 0x0d, 0x76, 0xdd, 0x60, 0xc6, 0x28,
+	0x19, 0x93, 0xcf, 0x89, 0x84, 0xa3, 0x53, 0xd8, 0xe5, 0x24, 0x66, 0x51, 0x18, 0x13, 0x6f, 0x3d,
+	0xc5, 0xbd, 0x02, 0x23, 0x0e, 0xbe, 0x91, 0x54, 0xba, 0x8e, 0x9e, 0x41, 0x83, 0x65, 0x28, 0x73,
+	0x5b, 0x3e, 0xb4, 0x7a, 0x7b, 0x95, 0x12, 0x8a, 0x7e, 0x17, 0x50, 0xea, 0x25, 0x31, 0xe1, 0x21,
+	0x9e, 0x11, 0xb3, 0x26, 0xd1, 0x3b, 0xc8, 0x04, 0x23, 0x7d, 0x8e, 0x70, 0x22, 0x3e, 0x79, 0xf1,
+	0x24, 0x92, 0x51, 0xea, 0xaa, 0x62, 0x7b, 0xa0, 0x2f, 0x22, 0x67, 0xae, 0x45, 0x2b, 0xed, 0x6f,
+	0x56, 0x06, 0xec, 0x2c, 0x5d, 0x54, 0xc8, 0x26, 0xea, 0x40, 0xab, 0x68, 0xa0, 0x82, 0x36, 0xed,
+	0x3e, 0xec, 0xbb, 0x82, 0x13, 0x3c, 0x93, 0xdc, 0xf7, 0x21, 0x4b, 0x44, 0x1f, 0x53, 0xba, 0x98,
+	0xcf, 0x9a, 0x5e, 0xf6, 0x19, 0x58, 0x55, 0x22, 0x79, 0xe2, 0x43, 0x78, 0x82, 0x7d, 0x9f, 0x13,
+	0x1f, 0x0b, 0x32, 0xf5, 0x72, 0xc1, 0x6c, 0x7a, 0x4a, 0xb5, 0x6e, 0xbf, 0x02, 0xb4, 0x00, 0x8f,
+	0x30, 0x97, 0x81, 0xe5, 0x81, 0xc4, 0x6a, 0x79, 0xbf, 0x31, 0x2a, 0x7c, 0x7a, 0x38, 0x5f, 0xb0,
+	0x9a, 0x5e, 0x36, 0x76, 0xfb, 0x87, 0x56, 0x30, 0x1e, 0x26, 0x62, 0x25, 0xfe, 0xe6, 0xeb, 0x3d,
+	0x83, 0xce, 0x92, 0xc1, 0x96, 0x51, 0xa4, 0xdb, 0xb6, 0x6c, 0xfe, 0xa8, 0xcc, 0xab, 0x88, 0xbc,
+	0xe6, 0x19, 0xd8, 0x03, 0x38, 0xa8, 0x8c, 0xbd, 0xd9, 0x8a, 0x5f, 0xbc, 0x86, 0x56, 0x31, 0xbc,
+	0x01, 0xed, 0xfe, 0xf0, 0x6a, 0x34, 0x1e, 0xb8, 0xee, 0xf9, 0xdb, 0x0f, 0x03, 0xe3, 0x11, 0x42,
+	0xa0, 0xdf, 0x5e, 0x97, 0xde, 0x34, 0x04, 0xf0, 0x78, 0x7c, 0x7e, 0x7d, 0x31, 0xbc, 0x32, 0xb6,
+	0x7a, 0x3f, 0x6b, 0xd0, 0xba, 0x91, 0xa2, 0xae, 0x9c, 0x6b, 0x30, 0x21, 0xe8, 0x25, 0x34, 0xd3,
+	0x9f, 0x4d, 0xa5, 0x41, 0x9d, 0xb2, 0x69, 0x5a, 0xb0, 0xaa, 0x1e, 0xd1, 0x25, 0x34, 0x6f, 0x43,
+	0xcc, 0x33, 0xda, 0x41, 0x19, 0x51, 0xfa, 0xe1, 0xac, 0xa7, 0xd5, 0xc5, 0xbc, 0x6f, 0x0a, 0x9d,
+	0x8a, 0xb1, 0x20, 0x67, 0x85, 0x74, 0xef, 0xc2, 0xad, 0xe7, 0x6b, 0x20, 0x33, 0xaf, 0x53, 0x0d,
+	0x05, 0x80, 0xfe, 0x3c, 0x5a, 0x74, 0x7c, 0x8f, 0xc4, 0xea, 0xbf, 0x61, 0x39, 0xff, 0x06, 0x66,
+	0x56, 0x8e, 0xb2, 0xd2, 0x2f, 0x13, 0x4a, 0x2f, 0x12, 0xd9, 0xed, 0xd7, 0xff, 0xd6, 0x93, 0xa3,
+	0xa5, 0x5d, 0xe9, 0xef, 0x30, 0xbd, 0x7b, 0x00, 0xab, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xab,
+	0xf0, 0x12, 0xb8, 0xca, 0x05, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/grpc/naming/naming.go b/go/src/google.golang.org/grpc/naming/naming.go
index 0660560..c2e0871 100644
--- a/go/src/google.golang.org/grpc/naming/naming.go
+++ b/go/src/google.golang.org/grpc/naming/naming.go
@@ -66,7 +66,8 @@
 // Watcher watches for the updates on the specified target.
 type Watcher interface {
 	// Next blocks until an update or error happens. It may return one or more
-	// updates. The first call should get the full set of the results.
+	// updates. The first call should get the full set of the results. It should
+	// return an error if and only if Watcher cannot recover.
 	Next() ([]*Update, error)
 	// Close closes the Watcher.
 	Close()
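The tightened `Watcher` contract above (`Next` returns an error if and only if the watcher cannot recover) can be illustrated with a minimal in-memory watcher. The `Update`/`Operation` types below are local stand-ins for `naming.Update`, defined here only so the sketch is self-contained; this is not the grpc-go implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins for the grpc/naming types, so the sketch is self-contained.
type Operation int

const (
	Add Operation = iota
	Delete
)

type Update struct {
	Op   Operation
	Addr string
}

// chanWatcher delivers updates from a channel. Per the documented contract,
// the first Next call returns the full initial address set, and an error is
// returned only once the watcher is closed, i.e. cannot recover.
type chanWatcher struct {
	updates chan []*Update
}

func (w *chanWatcher) Next() ([]*Update, error) {
	us, ok := <-w.updates
	if !ok {
		// Channel closed: the watcher cannot recover, so report an error.
		return nil, errors.New("watcher closed")
	}
	return us, nil
}

func (w *chanWatcher) Close() { close(w.updates) }

func main() {
	w := &chanWatcher{updates: make(chan []*Update, 1)}
	// The first batch carries the full set of current addresses.
	w.updates <- []*Update{{Add, "10.0.0.1:443"}, {Add, "10.0.0.2:443"}}
	first, err := w.Next()
	fmt.Println(len(first), err)
	w.Close()
	_, err = w.Next()
	fmt.Println(err != nil)
}
```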
diff --git a/go/src/google.golang.org/grpc/picker.go b/go/src/google.golang.org/grpc/picker.go
deleted file mode 100644
index 50f315b..0000000
--- a/go/src/google.golang.org/grpc/picker.go
+++ /dev/null
@@ -1,243 +0,0 @@
-/*
- *
- * Copyright 2014, Google Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
- *
- *     * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- *     * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-
-package grpc
-
-import (
-	"container/list"
-	"fmt"
-	"sync"
-
-	"golang.org/x/net/context"
-	"google.golang.org/grpc/grpclog"
-	"google.golang.org/grpc/naming"
-	"google.golang.org/grpc/transport"
-)
-
-// Picker picks a Conn for RPC requests.
-// This is EXPERIMENTAL and please do not implement your own Picker for now.
-type Picker interface {
-	// Init does initial processing for the Picker, e.g., initiate some connections.
-	Init(cc *ClientConn) error
-	// Pick blocks until either a transport.ClientTransport is ready for the upcoming RPC
-	// or some error happens.
-	Pick(ctx context.Context) (transport.ClientTransport, error)
-	// PickAddr picks a peer address for connecting. This will be called repeated for
-	// connecting/reconnecting.
-	PickAddr() (string, error)
-	// State returns the connectivity state of the underlying connections.
-	State() (ConnectivityState, error)
-	// WaitForStateChange blocks until the state changes to something other than
-	// the sourceState. It returns the new state or error.
-	WaitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error)
-	// Close closes all the Conn's owned by this Picker.
-	Close() error
-}
-
-// unicastPicker is the default Picker which is used when there is no custom Picker
-// specified by users. It always picks the same Conn.
-type unicastPicker struct {
-	target string
-	conn   *Conn
-}
-
-func (p *unicastPicker) Init(cc *ClientConn) error {
-	c, err := NewConn(cc)
-	if err != nil {
-		return err
-	}
-	p.conn = c
-	return nil
-}
-
-func (p *unicastPicker) Pick(ctx context.Context) (transport.ClientTransport, error) {
-	return p.conn.Wait(ctx)
-}
-
-func (p *unicastPicker) PickAddr() (string, error) {
-	return p.target, nil
-}
-
-func (p *unicastPicker) State() (ConnectivityState, error) {
-	return p.conn.State(), nil
-}
-
-func (p *unicastPicker) WaitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error) {
-	return p.conn.WaitForStateChange(ctx, sourceState)
-}
-
-func (p *unicastPicker) Close() error {
-	if p.conn != nil {
-		return p.conn.Close()
-	}
-	return nil
-}
-
-// unicastNamingPicker picks an address from a name resolver to set up the connection.
-type unicastNamingPicker struct {
-	cc       *ClientConn
-	resolver naming.Resolver
-	watcher  naming.Watcher
-	mu       sync.Mutex
-	// The list of the addresses are obtained from watcher.
-	addrs *list.List
-	// It tracks the current picked addr by PickAddr(). The next PickAddr may
-	// push it forward on addrs.
-	pickedAddr *list.Element
-	conn       *Conn
-}
-
-// NewUnicastNamingPicker creates a Picker to pick addresses from a name resolver
-// to connect.
-func NewUnicastNamingPicker(r naming.Resolver) Picker {
-	return &unicastNamingPicker{
-		resolver: r,
-		addrs:    list.New(),
-	}
-}
-
-type addrInfo struct {
-	addr string
-	// Set to true if this addrInfo needs to be deleted in the next PickAddrr() call.
-	deleting bool
-}
-
-// processUpdates calls Watcher.Next() once and processes the obtained updates.
-func (p *unicastNamingPicker) processUpdates() error {
-	updates, err := p.watcher.Next()
-	if err != nil {
-		return err
-	}
-	for _, update := range updates {
-		switch update.Op {
-		case naming.Add:
-			p.mu.Lock()
-			p.addrs.PushBack(&addrInfo{
-				addr: update.Addr,
-			})
-			p.mu.Unlock()
-			// Initial connection setup
-			if p.conn == nil {
-				conn, err := NewConn(p.cc)
-				if err != nil {
-					return err
-				}
-				p.conn = conn
-			}
-		case naming.Delete:
-			p.mu.Lock()
-			for e := p.addrs.Front(); e != nil; e = e.Next() {
-				if update.Addr == e.Value.(*addrInfo).addr {
-					if e == p.pickedAddr {
-						// Do not remove the element now if it is the current picked
-						// one. We leave the deletion to the next PickAddr() call.
-						e.Value.(*addrInfo).deleting = true
-						// Notify Conn to close it. All the live RPCs on this connection
-						// will be aborted.
-						p.conn.NotifyReset()
-					} else {
-						p.addrs.Remove(e)
-					}
-				}
-			}
-			p.mu.Unlock()
-		default:
-			grpclog.Println("Unknown update.Op ", update.Op)
-		}
-	}
-	return nil
-}
-
-// monitor runs in a standalone goroutine to keep watching name resolution updates until the watcher
-// is closed.
-func (p *unicastNamingPicker) monitor() {
-	for {
-		if err := p.processUpdates(); err != nil {
-			return
-		}
-	}
-}
-
-func (p *unicastNamingPicker) Init(cc *ClientConn) error {
-	w, err := p.resolver.Resolve(cc.target)
-	if err != nil {
-		return err
-	}
-	p.watcher = w
-	p.cc = cc
-	// Get the initial name resolution.
-	if err := p.processUpdates(); err != nil {
-		return err
-	}
-	go p.monitor()
-	return nil
-}
-
-func (p *unicastNamingPicker) Pick(ctx context.Context) (transport.ClientTransport, error) {
-	return p.conn.Wait(ctx)
-}
-
-func (p *unicastNamingPicker) PickAddr() (string, error) {
-	p.mu.Lock()
-	defer p.mu.Unlock()
-	if p.pickedAddr == nil {
-		p.pickedAddr = p.addrs.Front()
-	} else {
-		pa := p.pickedAddr
-		p.pickedAddr = pa.Next()
-		if pa.Value.(*addrInfo).deleting {
-			p.addrs.Remove(pa)
-		}
-		if p.pickedAddr == nil {
-			p.pickedAddr = p.addrs.Front()
-		}
-	}
-	if p.pickedAddr == nil {
-		return "", fmt.Errorf("there is no address available to pick")
-	}
-	return p.pickedAddr.Value.(*addrInfo).addr, nil
-}
-
-func (p *unicastNamingPicker) State() (ConnectivityState, error) {
-	return 0, fmt.Errorf("State() is not supported for unicastNamingPicker")
-}
-
-func (p *unicastNamingPicker) WaitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error) {
-	return 0, fmt.Errorf("WaitForStateChange is not supported for unicastNamingPciker")
-}
-
-func (p *unicastNamingPicker) Close() error {
-	p.watcher.Close()
-	p.conn.Close()
-	return nil
-}
diff --git a/go/src/google.golang.org/grpc/picker_test.go b/go/src/google.golang.org/grpc/picker_test.go
deleted file mode 100644
index dd29497..0000000
--- a/go/src/google.golang.org/grpc/picker_test.go
+++ /dev/null
@@ -1,188 +0,0 @@
-/*
- *
- * Copyright 2014, Google Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
- *
- *     * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- *     * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-
-package grpc
-
-import (
-	"fmt"
-	"math"
-	"testing"
-	"time"
-
-	"golang.org/x/net/context"
-	"google.golang.org/grpc/naming"
-)
-
-type testWatcher struct {
-	// the channel to receives name resolution updates
-	update chan *naming.Update
-	// the side channel to get to know how many updates in a batch
-	side chan int
-	// the channel to notifiy update injector that the update reading is done
-	readDone chan int
-}
-
-func (w *testWatcher) Next() (updates []*naming.Update, err error) {
-	n := <-w.side
-	if n == 0 {
-		return nil, fmt.Errorf("w.side is closed")
-	}
-	for i := 0; i < n; i++ {
-		u := <-w.update
-		if u != nil {
-			updates = append(updates, u)
-		}
-	}
-	w.readDone <- 0
-	return
-}
-
-func (w *testWatcher) Close() {
-}
-
-func (w *testWatcher) inject(updates []*naming.Update) {
-	w.side <- len(updates)
-	for _, u := range updates {
-		w.update <- u
-	}
-	<-w.readDone
-}
-
-type testNameResolver struct {
-	w    *testWatcher
-	addr string
-}
-
-func (r *testNameResolver) Resolve(target string) (naming.Watcher, error) {
-	r.w = &testWatcher{
-		update:   make(chan *naming.Update, 1),
-		side:     make(chan int, 1),
-		readDone: make(chan int),
-	}
-	r.w.side <- 1
-	r.w.update <- &naming.Update{
-		Op:   naming.Add,
-		Addr: r.addr,
-	}
-	go func() {
-		<-r.w.readDone
-	}()
-	return r.w, nil
-}
-
-func startServers(t *testing.T, numServers, port int, maxStreams uint32) ([]*server, *testNameResolver) {
-	var servers []*server
-	for i := 0; i < numServers; i++ {
-		s := newTestServer()
-		servers = append(servers, s)
-		go s.start(t, port, maxStreams)
-		s.wait(t, 2*time.Second)
-	}
-	// Point to server1
-	addr := "127.0.0.1:" + servers[0].port
-	return servers, &testNameResolver{
-		addr: addr,
-	}
-}
-
-func TestNameDiscovery(t *testing.T) {
-	// Start 3 servers on 3 ports.
-	servers, r := startServers(t, 3, 0, math.MaxUint32)
-	cc, err := Dial("foo.bar.com", WithPicker(NewUnicastNamingPicker(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
-	if err != nil {
-		t.Fatalf("Failed to create ClientConn: %v", err)
-	}
-	var reply string
-	if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse {
-		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want <nil>", err)
-	}
-	// Inject name resolution change to point to the second server now.
-	var updates []*naming.Update
-	updates = append(updates, &naming.Update{
-		Op:   naming.Delete,
-		Addr: "127.0.0.1:" + servers[0].port,
-	})
-	updates = append(updates, &naming.Update{
-		Op:   naming.Add,
-		Addr: "127.0.0.1:" + servers[1].port,
-	})
-	r.w.inject(updates)
-	servers[0].stop()
-	if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse {
-		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want <nil>", err)
-	}
-	// Add another server address (server#3) to name resolution
-	updates = nil
-	updates = append(updates, &naming.Update{
-		Op:   naming.Add,
-		Addr: "127.0.0.1:" + servers[2].port,
-	})
-	r.w.inject(updates)
-	// Stop server#2. The library should direct to server#3 automatically.
-	servers[1].stop()
-	if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse {
-		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want <nil>", err)
-	}
-	cc.Close()
-	servers[2].stop()
-}
-
-func TestEmptyAddrs(t *testing.T) {
-	servers, r := startServers(t, 1, 0, math.MaxUint32)
-	cc, err := Dial("foo.bar.com", WithPicker(NewUnicastNamingPicker(r)), WithBlock(), WithInsecure(), WithCodec(testCodec{}))
-	if err != nil {
-		t.Fatalf("Failed to create ClientConn: %v", err)
-	}
-	var reply string
-	if err := Invoke(context.Background(), "/foo/bar", &expectedRequest, &reply, cc); err != nil || reply != expectedResponse {
-		t.Fatalf("grpc.Invoke(_, _, _, _, _) = %v, want <nil>", err)
-	}
-	// Inject name resolution change to remove the server address so that there is no address
-	// available after that.
-	var updates []*naming.Update
-	updates = append(updates, &naming.Update{
-		Op:   naming.Delete,
-		Addr: "127.0.0.1:" + servers[0].port,
-	})
-	r.w.inject(updates)
-	// Loop until the above updates apply.
-	for {
-		time.Sleep(10 * time.Millisecond)
-		ctx, _ := context.WithTimeout(context.Background(), 10*time.Millisecond)
-		if err := Invoke(ctx, "/foo/bar", &expectedRequest, &reply, cc); err != nil {
-			break
-		}
-	}
-	cc.Close()
-	servers[0].stop()
-}
diff --git a/go/src/google.golang.org/grpc/rpc_util.go b/go/src/google.golang.org/grpc/rpc_util.go
index df3486b..fe3748d 100644
--- a/go/src/google.golang.org/grpc/rpc_util.go
+++ b/go/src/google.golang.org/grpc/rpc_util.go
@@ -41,9 +41,7 @@
 	"io"
 	"io/ioutil"
 	"math"
-	"math/rand"
 	"os"
-	"time"
 
 	"github.com/golang/protobuf/proto"
 	"golang.org/x/net/context"
@@ -63,7 +61,7 @@
 	String() string
 }
 
-// protoCodec is a Codec implemetation with protobuf. It is the default codec for gRPC.
+// protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC.
 type protoCodec struct{}
 
 func (protoCodec) Marshal(v interface{}) ([]byte, error) {
@@ -189,7 +187,7 @@
 	compressionMade
 )
 
-// parser reads complelete gRPC messages from the underlying reader.
+// parser reads complete gRPC messages from the underlying reader.
 type parser struct {
 	// r is the underlying reader.
 	// See the comment on recvMsg for the permissible
@@ -286,14 +284,11 @@
 	switch pf {
 	case compressionNone:
 	case compressionMade:
-		if recvCompress == "" {
-			return transport.StreamErrorf(codes.InvalidArgument, "grpc: invalid grpc-encoding %q with compression enabled", recvCompress)
-		}
 		if dc == nil || recvCompress != dc.Type() {
-			return transport.StreamErrorf(codes.InvalidArgument, "grpc: Decompressor is not installed for grpc-encoding %q", recvCompress)
+			return transport.StreamErrorf(codes.Unimplemented, "grpc: Decompressor is not installed for grpc-encoding %q", recvCompress)
 		}
 	default:
-		return transport.StreamErrorf(codes.InvalidArgument, "grpc: received unexpected payload format %d", pf)
+		return transport.StreamErrorf(codes.Internal, "grpc: received unexpected payload format %d", pf)
 	}
 	return nil
 }
@@ -411,42 +406,10 @@
 	return codes.Unknown
 }
 
-const (
-	// how long to wait after the first failure before retrying
-	baseDelay = 1.0 * time.Second
-	// upper bound of backoff delay
-	maxDelay = 120 * time.Second
-	// backoff increases by this factor on each retry
-	backoffFactor = 1.6
-	// backoff is randomized downwards by this factor
-	backoffJitter = 0.2
-)
-
-func backoff(retries int) (t time.Duration) {
-	if retries == 0 {
-		return baseDelay
-	}
-	backoff, max := float64(baseDelay), float64(maxDelay)
-	for backoff < max && retries > 0 {
-		backoff *= backoffFactor
-		retries--
-	}
-	if backoff > max {
-		backoff = max
-	}
-	// Randomize backoff delays so that if a cluster of requests start at
-	// the same time, they won't operate in lockstep.
-	backoff *= 1 + backoffJitter*(rand.Float64()*2-1)
-	if backoff < 0 {
-		return 0
-	}
-	return time.Duration(backoff)
-}
-
-// SupportPackageIsVersion1 is referenced from generated protocol buffer files
+// SupportPackageIsVersion2 is referenced from generated protocol buffer files
 // to assert that that code is compatible with this version of the grpc package.
 //
 // This constant may be renamed in the future if a change in the generated code
 // requires a synchronised update of grpc-go and protoc-gen-go. This constant
 // should not be referenced from any other code.
-const SupportPackageIsVersion1 = true
+const SupportPackageIsVersion2 = true
diff --git a/go/src/google.golang.org/grpc/server.go b/go/src/google.golang.org/grpc/server.go
index bdf68a0..440fe24 100644
--- a/go/src/google.golang.org/grpc/server.go
+++ b/go/src/google.golang.org/grpc/server.go
@@ -57,7 +57,7 @@
 	"google.golang.org/grpc/transport"
 )
 
-type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error)
+type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error)
 
 // MethodDesc represents an RPC service's method specification.
 type MethodDesc struct {
@@ -73,6 +73,7 @@
 	HandlerType interface{}
 	Methods     []MethodDesc
 	Streams     []StreamDesc
+	Metadata    interface{}
 }
 
 // service consists of the information of the server serving this service and
@@ -99,6 +100,8 @@
 	codec                Codec
 	cp                   Compressor
 	dc                   Decompressor
+	unaryInt             UnaryServerInterceptor
+	streamInt            StreamServerInterceptor
 	maxConcurrentStreams uint32
 	useHandlerImpl       bool // use http.Handler-based server
 }
@@ -113,12 +116,14 @@
 	}
 }
 
+// RPCCompressor returns a ServerOption that sets a compressor for outbound message.
 func RPCCompressor(cp Compressor) ServerOption {
 	return func(o *options) {
 		o.cp = cp
 	}
 }
 
+// RPCDecompressor returns a ServerOption that sets a decompressor for inbound message.
 func RPCDecompressor(dc Decompressor) ServerOption {
 	return func(o *options) {
 		o.dc = dc
@@ -140,6 +145,29 @@
 	}
 }
 
+// UnaryInterceptor returns a ServerOption that sets the UnaryServerInterceptor for the
+// server. Only one unary interceptor can be installed. The construction of multiple
+// interceptors (e.g., chaining) can be implemented at the caller.
+func UnaryInterceptor(i UnaryServerInterceptor) ServerOption {
+	return func(o *options) {
+		if o.unaryInt != nil {
+			panic("The unary server interceptor has been set.")
+		}
+		o.unaryInt = i
+	}
+}
+
+// StreamInterceptor returns a ServerOption that sets the StreamServerInterceptor for the
+// server. Only one stream interceptor can be installed.
+func StreamInterceptor(i StreamServerInterceptor) ServerOption {
+	return func(o *options) {
+		if o.streamInt != nil {
+			panic("The stream server interceptor has been set.")
+		}
+		o.streamInt = i
+	}
+}
+
 // NewServer creates a gRPC server which has no service registered and has not
 // started to accept requests yet.
 func NewServer(opt ...ServerOption) *Server {
@@ -232,12 +260,14 @@
 // Serve accepts incoming connections on the listener lis, creating a new
 // ServerTransport and service goroutine for each. The service goroutines
 // read gRPC requests and then call the registered handlers to reply to them.
-// Service returns when lis.Accept fails.
+// Serve returns when lis.Accept fails. lis will be closed when
+// this method returns.
 func (s *Server) Serve(lis net.Listener) error {
 	s.mu.Lock()
 	s.printf("serving")
 	if s.lis == nil {
 		s.mu.Unlock()
+		lis.Close()
 		return ErrServerStopped
 	}
 	s.lis[lis] = true
@@ -435,6 +465,10 @@
 			}
 		}()
 	}
+	if s.opts.cp != nil {
+		// NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686.
+		stream.SetSendCompress(s.opts.cp.Type())
+	}
 	p := &parser{r: stream}
 	for {
 		pf, req, err := p.recvMsg()
@@ -494,7 +528,7 @@
 			}
 			return nil
 		}
-		reply, appErr := md.Handler(srv.server, stream.Context(), df)
+		reply, appErr := md.Handler(srv.server, stream.Context(), df, s.opts.unaryInt)
 		if appErr != nil {
 			if err, ok := appErr.(rpcError); ok {
 				statusCode = err.code
@@ -520,9 +554,6 @@
 			Last:  true,
 			Delay: false,
 		}
-		if s.opts.cp != nil {
-			stream.SetSendCompress(s.opts.cp.Type())
-		}
 		if err := s.sendResponse(t, stream, reply, s.opts.cp, opts); err != nil {
 			switch err := err.(type) {
 			case transport.ConnectionError:
@@ -572,7 +603,18 @@
 			ss.mu.Unlock()
 		}()
 	}
-	if appErr := sd.Handler(srv.server, ss); appErr != nil {
+	var appErr error
+	if s.opts.streamInt == nil {
+		appErr = sd.Handler(srv.server, ss)
+	} else {
+		info := &StreamServerInfo{
+			FullMethod:     stream.Method(),
+			IsClientStream: sd.ClientStreams,
+			IsServerStream: sd.ServerStreams,
+		}
+		appErr = s.opts.streamInt(srv.server, ss, info, sd.Handler)
+	}
+	if appErr != nil {
 		if err, ok := appErr.(rpcError); ok {
 			ss.statusCode = err.code
 			ss.statusDesc = err.desc
diff --git a/go/src/google.golang.org/grpc/server_test.go b/go/src/google.golang.org/grpc/server_test.go
new file mode 100644
index 0000000..bf23237
--- /dev/null
+++ b/go/src/google.golang.org/grpc/server_test.go
@@ -0,0 +1,61 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package grpc
+
+import (
+	"net"
+	"strings"
+	"testing"
+)
+
+func TestStopBeforeServe(t *testing.T) {
+	lis, err := net.Listen("tcp", "localhost:0")
+	if err != nil {
+		t.Fatalf("failed to create listener: %v", err)
+	}
+
+	server := NewServer()
+	server.Stop()
+	err = server.Serve(lis)
+	if err != ErrServerStopped {
+		t.Fatalf("server.Serve() error = %v, want %v", err, ErrServerStopped)
+	}
+
+	// server.Serve is responsible for closing the listener, even if the
+	// server was already stopped.
+	err = lis.Close()
+	if got, want := ErrorDesc(err), "use of closed network connection"; !strings.Contains(got, want) {
+		t.Errorf("Close() error = %q, want %q", got, want)
+	}
+}
diff --git a/go/src/google.golang.org/grpc/stream.go b/go/src/google.golang.org/grpc/stream.go
index b832078..25be4b8 100644
--- a/go/src/google.golang.org/grpc/stream.go
+++ b/go/src/google.golang.org/grpc/stream.go
@@ -47,12 +47,14 @@
 	"google.golang.org/grpc/transport"
 )
 
-type streamHandler func(srv interface{}, stream ServerStream) error
+// StreamHandler defines the handler called by gRPC server to complete the
+// execution of a streaming RPC.
+type StreamHandler func(srv interface{}, stream ServerStream) error
 
 // StreamDesc represents a streaming RPC service's method specification.
 type StreamDesc struct {
 	StreamName string
-	Handler    streamHandler
+	Handler    StreamHandler
 
 	// At least one of these is true.
 	ServerStreams bool
@@ -77,9 +79,9 @@
 	RecvMsg(m interface{}) error
 }
 
-// ClientStream defines the interface a client stream has to satify.
+// ClientStream defines the interface a client stream has to satisfy.
 type ClientStream interface {
-	// Header returns the header metedata received from the server if there
+	// Header returns the header metadata received from the server if there
 	// is any. It blocks if the metadata is not ready to read.
 	Header() (metadata.MD, error)
 	// Trailer returns the trailer metadata from the server. It must be called
@@ -101,12 +103,16 @@
 	var (
 		t   transport.ClientTransport
 		err error
+		put func()
 	)
-	t, err = cc.dopts.picker.Pick(ctx)
+	// TODO(zhaoq): CallOption is omitted. Add support when it is needed.
+	gopts := BalancerGetOptions{
+		BlockingWait: false,
+	}
+	t, put, err = cc.getTransport(ctx, gopts)
 	if err != nil {
 		return nil, toRPCErr(err)
 	}
-	// TODO(zhaoq): CallOption is omitted. Add support when it is needed.
 	callHdr := &transport.CallHdr{
 		Host:   cc.authority,
 		Method: method,
@@ -117,6 +123,7 @@
 	}
 	cs := &clientStream{
 		desc:    desc,
+		put:     put,
 		codec:   cc.dopts.codec,
 		cp:      cc.dopts.cp,
 		dc:      cc.dopts.dc,
@@ -172,6 +179,7 @@
 	tracing bool // set to EnableTracing when the clientStream is created.
 
 	mu     sync.Mutex
+	put    func()
 	closed bool
 	// trInfo.tr is set when the clientStream is created (if EnableTracing is true),
 	// and is set to nil when the clientStream's finish method is called.
@@ -309,6 +317,10 @@
 	}
 	cs.mu.Lock()
 	defer cs.mu.Unlock()
+	if cs.put != nil {
+		cs.put()
+		cs.put = nil
+	}
 	if cs.trInfo.tr != nil {
 		if err == nil || err == io.EOF {
 			cs.trInfo.tr.LazyPrintf("RPC: [OK]")
diff --git a/go/src/google.golang.org/grpc/stress/client/main.go b/go/src/google.golang.org/grpc/stress/client/main.go
new file mode 100644
index 0000000..bb665e9
--- /dev/null
+++ b/go/src/google.golang.org/grpc/stress/client/main.go
@@ -0,0 +1,298 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+// client starts an interop client to do stress test and a metrics server to report qps.
+package main
+
+import (
+	"flag"
+	"fmt"
+	"math/rand"
+	"net"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/grpclog"
+	"google.golang.org/grpc/interop"
+	testpb "google.golang.org/grpc/interop/grpc_testing"
+	metricspb "google.golang.org/grpc/stress/grpc_testing"
+)
+
+var (
+	serverAddresses      = flag.String("server_addresses", "localhost:8080", "a list of server addresses")
+	testCases            = flag.String("test_cases", "", "a list of test cases along with the relative weights")
+	testDurationSecs     = flag.Int("test_duration_secs", -1, "test duration in seconds")
+	numChannelsPerServer = flag.Int("num_channels_per_server", 1, "Number of channels (i.e connections) to each server")
+	numStubsPerChannel   = flag.Int("num_stubs_per_channel", 1, "Number of client stubs per each connection to server")
+	metricsPort          = flag.Int("metrics_port", 8081, "The port at which the stress client exposes QPS metrics")
+)
+
+// testCaseWithWeight contains the test case type and its weight.
+type testCaseWithWeight struct {
+	name   string
+	weight int
+}
+
+// parseTestCases converts test case string to a list of struct testCaseWithWeight.
+func parseTestCases(testCaseString string) []testCaseWithWeight {
+	testCaseStrings := strings.Split(testCaseString, ",")
+	testCases := make([]testCaseWithWeight, len(testCaseStrings))
+	for i, str := range testCaseStrings {
+		testCase := strings.Split(str, ":")
+		if len(testCase) != 2 {
+			panic(fmt.Sprintf("invalid test case with weight: %s", str))
+		}
+		// Check if test case is supported.
+		switch testCase[0] {
+		case
+			"empty_unary",
+			"large_unary",
+			"client_streaming",
+			"server_streaming",
+			"empty_stream":
+		default:
+			panic(fmt.Sprintf("unknown test type: %s", testCase[0]))
+		}
+		testCases[i].name = testCase[0]
+		w, err := strconv.Atoi(testCase[1])
+		if err != nil {
+			panic(fmt.Sprintf("%v", err))
+		}
+		testCases[i].weight = w
+	}
+	return testCases
+}
+
+// weightedRandomTestSelector defines a weighted random selector for test case types.
+type weightedRandomTestSelector struct {
+	tests       []testCaseWithWeight
+	totalWeight int
+}
+
+// newWeightedRandomTestSelector constructs a weightedRandomTestSelector with the given list of testCaseWithWeight.
+func newWeightedRandomTestSelector(tests []testCaseWithWeight) *weightedRandomTestSelector {
+	var totalWeight int
+	for _, t := range tests {
+		totalWeight += t.weight
+	}
+	rand.Seed(time.Now().UnixNano())
+	return &weightedRandomTestSelector{tests, totalWeight}
+}
+
+func (selector weightedRandomTestSelector) getNextTest() string {
+	random := rand.Intn(selector.totalWeight)
+	var weightSofar int
+	for _, test := range selector.tests {
+		weightSofar += test.weight
+		if random < weightSofar {
+			return test.name
+		}
+	}
+	panic("no test case selected by weightedRandomTestSelector")
+}
+
+// gauge stores the qps of one interop client (one stub).
+type gauge struct {
+	mutex sync.RWMutex
+	val   int64
+}
+
+func (g *gauge) set(v int64) {
+	g.mutex.Lock()
+	defer g.mutex.Unlock()
+	g.val = v
+}
+
+func (g *gauge) get() int64 {
+	g.mutex.RLock()
+	defer g.mutex.RUnlock()
+	return g.val
+}
+
+// server implements metrics server functions.
+type server struct {
+	mutex sync.RWMutex
+	// gauges is a map from /stress_test/server_<n>/channel_<n>/stub_<n>/qps to its qps gauge.
+	gauges map[string]*gauge
+}
+
+// newMetricsServer returns a new metrics server.
+func newMetricsServer() *server {
+	return &server{gauges: make(map[string]*gauge)}
+}
+
+// GetAllGauges returns all gauges.
+func (s *server) GetAllGauges(in *metricspb.EmptyMessage, stream metricspb.MetricsService_GetAllGaugesServer) error {
+	s.mutex.RLock()
+	defer s.mutex.RUnlock()
+
+	for name, gauge := range s.gauges {
+		if err := stream.Send(&metricspb.GaugeResponse{Name: name, Value: &metricspb.GaugeResponse_LongValue{gauge.get()}}); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// GetGauge returns the gauge for the given name.
+func (s *server) GetGauge(ctx context.Context, in *metricspb.GaugeRequest) (*metricspb.GaugeResponse, error) {
+	s.mutex.RLock()
+	defer s.mutex.RUnlock()
+
+	if g, ok := s.gauges[in.Name]; ok {
+		return &metricspb.GaugeResponse{Name: in.Name, Value: &metricspb.GaugeResponse_LongValue{g.get()}}, nil
+	}
+	return nil, grpc.Errorf(codes.InvalidArgument, "gauge with name %s not found", in.Name)
+}
+
+// createGauge creates a gauge with the given name in the metrics server.
+func (s *server) createGauge(name string) *gauge {
+	s.mutex.Lock()
+	defer s.mutex.Unlock()
+
+	if _, ok := s.gauges[name]; ok {
+		// gauge already exists.
+		panic(fmt.Sprintf("gauge %s already exists", name))
+	}
+	var g gauge
+	s.gauges[name] = &g
+	return &g
+}
+
+func startServer(server *server, port int) {
+	lis, err := net.Listen("tcp", ":"+strconv.Itoa(port))
+	if err != nil {
+		grpclog.Fatalf("failed to listen: %v", err)
+	}
+
+	s := grpc.NewServer()
+	metricspb.RegisterMetricsServiceServer(s, server)
+	s.Serve(lis)
+}
+
+// performRPCs uses weightedRandomTestSelector to select a test case and runs it in a loop until stopped.
+func performRPCs(gauge *gauge, conn *grpc.ClientConn, selector *weightedRandomTestSelector, stop <-chan bool) {
+	client := testpb.NewTestServiceClient(conn)
+	var numCalls int64
+	startTime := time.Now()
+	for {
+		done := make(chan bool, 1)
+		go func() {
+			test := selector.getNextTest()
+			switch test {
+			case "empty_unary":
+				interop.DoEmptyUnaryCall(client)
+			case "large_unary":
+				interop.DoLargeUnaryCall(client)
+			case "client_streaming":
+				interop.DoClientStreaming(client)
+			case "server_streaming":
+				interop.DoServerStreaming(client)
+			case "empty_stream":
+				interop.DoEmptyStream(client)
+			}
+			done <- true
+		}()
+		select {
+		case <-stop:
+			return
+		case <-done:
+			numCalls++
+			gauge.set(int64(float64(numCalls) / time.Since(startTime).Seconds()))
+		}
+	}
+}
+
+func logParameterInfo(addresses []string, tests []testCaseWithWeight) {
+	grpclog.Printf("server_addresses: %s", *serverAddresses)
+	grpclog.Printf("test_cases: %s", *testCases)
+	grpclog.Printf("test_duration_secs: %d", *testDurationSecs)
+	grpclog.Printf("num_channels_per_server: %d", *numChannelsPerServer)
+	grpclog.Printf("num_stubs_per_channel: %d", *numStubsPerChannel)
+	grpclog.Printf("metrics_port: %d", *metricsPort)
+
+	grpclog.Println("addresses:")
+	for i, addr := range addresses {
+		grpclog.Printf("%d. %s\n", i+1, addr)
+	}
+	grpclog.Println("tests:")
+	for i, test := range tests {
+		grpclog.Printf("%d. %v\n", i+1, test)
+	}
+}
+
+func main() {
+	flag.Parse()
+	addresses := strings.Split(*serverAddresses, ",")
+	tests := parseTestCases(*testCases)
+	logParameterInfo(addresses, tests)
+	testSelector := newWeightedRandomTestSelector(tests)
+	metricsServer := newMetricsServer()
+
+	var wg sync.WaitGroup
+	wg.Add(len(addresses) * *numChannelsPerServer * *numStubsPerChannel)
+	stop := make(chan bool)
+
+	for serverIndex, address := range addresses {
+		for connIndex := 0; connIndex < *numChannelsPerServer; connIndex++ {
+			conn, err := grpc.Dial(address, grpc.WithInsecure())
+			if err != nil {
+				grpclog.Fatalf("Fail to dial: %v", err)
+			}
+			defer conn.Close()
+			for clientIndex := 0; clientIndex < *numStubsPerChannel; clientIndex++ {
+				name := fmt.Sprintf("/stress_test/server_%d/channel_%d/stub_%d/qps", serverIndex+1, connIndex+1, clientIndex+1)
+				go func() {
+					defer wg.Done()
+					g := metricsServer.createGauge(name)
+					performRPCs(g, conn, testSelector, stop)
+				}()
+			}
+		}
+	}
+	go startServer(metricsServer, *metricsPort)
+	if *testDurationSecs > 0 {
+		time.Sleep(time.Duration(*testDurationSecs) * time.Second)
+		close(stop)
+	}
+	wg.Wait()
+	grpclog.Printf(" ===== ALL DONE ===== ")
+}
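The selector above picks a test type in proportion to its weight by walking the cumulative weights until a random draw falls inside a test's interval. A minimal standalone sketch of the same selection rule (the names and weights here are illustrative, not taken from the patch):

```go
package main

import "fmt"

// testCase pairs a test name with its selection weight, mirroring
// testCaseWithWeight in the stress client.
type testCase struct {
	name   string
	weight int
}

// pick returns the test whose cumulative-weight interval contains r,
// where 0 <= r < sum of all weights. getNextTest applies the same rule
// with r drawn from rand.Intn(totalWeight).
func pick(tests []testCase, r int) string {
	soFar := 0
	for _, t := range tests {
		soFar += t.weight
		if r < soFar {
			return t.name
		}
	}
	panic("r out of range")
}

func main() {
	tests := []testCase{{"empty_unary", 1}, {"large_unary", 3}}
	fmt.Println(pick(tests, 0)) // r in [0,1) selects empty_unary
	fmt.Println(pick(tests, 3)) // r in [1,4) selects large_unary
}
```

With these weights, large_unary is selected three times as often as empty_unary over many draws.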
diff --git a/go/src/google.golang.org/grpc/stress/grpc_testing/metrics.pb.go b/go/src/google.golang.org/grpc/stress/grpc_testing/metrics.pb.go
new file mode 100644
index 0000000..0353c5f
--- /dev/null
+++ b/go/src/google.golang.org/grpc/stress/grpc_testing/metrics.pb.go
@@ -0,0 +1,356 @@
+// Code generated by protoc-gen-go.
+// source: stress/grpc_testing/metrics.proto
+// DO NOT EDIT!
+
+/*
+Package grpc_testing is a generated protocol buffer package.
+
+It is generated from these files:
+	stress/grpc_testing/metrics.proto
+
+It has these top-level messages:
+	GaugeResponse
+	GaugeRequest
+	EmptyMessage
+*/
+package grpc_testing
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+import (
+	context "golang.org/x/net/context"
+	grpc "google.golang.org/grpc"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+const _ = proto.ProtoPackageIsVersion1
+
+// Response message containing the gauge name and value
+type GaugeResponse struct {
+	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+	// Types that are valid to be assigned to Value:
+	//	*GaugeResponse_LongValue
+	//	*GaugeResponse_DoubleValue
+	//	*GaugeResponse_StringValue
+	Value isGaugeResponse_Value `protobuf_oneof:"value"`
+}
+
+func (m *GaugeResponse) Reset()                    { *m = GaugeResponse{} }
+func (m *GaugeResponse) String() string            { return proto.CompactTextString(m) }
+func (*GaugeResponse) ProtoMessage()               {}
+func (*GaugeResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+type isGaugeResponse_Value interface {
+	isGaugeResponse_Value()
+}
+
+type GaugeResponse_LongValue struct {
+	LongValue int64 `protobuf:"varint,2,opt,name=long_value,oneof"`
+}
+type GaugeResponse_DoubleValue struct {
+	DoubleValue float64 `protobuf:"fixed64,3,opt,name=double_value,oneof"`
+}
+type GaugeResponse_StringValue struct {
+	StringValue string `protobuf:"bytes,4,opt,name=string_value,oneof"`
+}
+
+func (*GaugeResponse_LongValue) isGaugeResponse_Value()   {}
+func (*GaugeResponse_DoubleValue) isGaugeResponse_Value() {}
+func (*GaugeResponse_StringValue) isGaugeResponse_Value() {}
+
+func (m *GaugeResponse) GetValue() isGaugeResponse_Value {
+	if m != nil {
+		return m.Value
+	}
+	return nil
+}
+
+func (m *GaugeResponse) GetLongValue() int64 {
+	if x, ok := m.GetValue().(*GaugeResponse_LongValue); ok {
+		return x.LongValue
+	}
+	return 0
+}
+
+func (m *GaugeResponse) GetDoubleValue() float64 {
+	if x, ok := m.GetValue().(*GaugeResponse_DoubleValue); ok {
+		return x.DoubleValue
+	}
+	return 0
+}
+
+func (m *GaugeResponse) GetStringValue() string {
+	if x, ok := m.GetValue().(*GaugeResponse_StringValue); ok {
+		return x.StringValue
+	}
+	return ""
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*GaugeResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+	return _GaugeResponse_OneofMarshaler, _GaugeResponse_OneofUnmarshaler, _GaugeResponse_OneofSizer, []interface{}{
+		(*GaugeResponse_LongValue)(nil),
+		(*GaugeResponse_DoubleValue)(nil),
+		(*GaugeResponse_StringValue)(nil),
+	}
+}
+
+func _GaugeResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+	m := msg.(*GaugeResponse)
+	// value
+	switch x := m.Value.(type) {
+	case *GaugeResponse_LongValue:
+		b.EncodeVarint(2<<3 | proto.WireVarint)
+		b.EncodeVarint(uint64(x.LongValue))
+	case *GaugeResponse_DoubleValue:
+		b.EncodeVarint(3<<3 | proto.WireFixed64)
+		b.EncodeFixed64(math.Float64bits(x.DoubleValue))
+	case *GaugeResponse_StringValue:
+		b.EncodeVarint(4<<3 | proto.WireBytes)
+		b.EncodeStringBytes(x.StringValue)
+	case nil:
+	default:
+		return fmt.Errorf("GaugeResponse.Value has unexpected type %T", x)
+	}
+	return nil
+}
+
+func _GaugeResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+	m := msg.(*GaugeResponse)
+	switch tag {
+	case 2: // value.long_value
+		if wire != proto.WireVarint {
+			return true, proto.ErrInternalBadWireType
+		}
+		x, err := b.DecodeVarint()
+		m.Value = &GaugeResponse_LongValue{int64(x)}
+		return true, err
+	case 3: // value.double_value
+		if wire != proto.WireFixed64 {
+			return true, proto.ErrInternalBadWireType
+		}
+		x, err := b.DecodeFixed64()
+		m.Value = &GaugeResponse_DoubleValue{math.Float64frombits(x)}
+		return true, err
+	case 4: // value.string_value
+		if wire != proto.WireBytes {
+			return true, proto.ErrInternalBadWireType
+		}
+		x, err := b.DecodeStringBytes()
+		m.Value = &GaugeResponse_StringValue{x}
+		return true, err
+	default:
+		return false, nil
+	}
+}
+
+func _GaugeResponse_OneofSizer(msg proto.Message) (n int) {
+	m := msg.(*GaugeResponse)
+	// value
+	switch x := m.Value.(type) {
+	case *GaugeResponse_LongValue:
+		n += proto.SizeVarint(2<<3 | proto.WireVarint)
+		n += proto.SizeVarint(uint64(x.LongValue))
+	case *GaugeResponse_DoubleValue:
+		n += proto.SizeVarint(3<<3 | proto.WireFixed64)
+		n += 8
+	case *GaugeResponse_StringValue:
+		n += proto.SizeVarint(4<<3 | proto.WireBytes)
+		n += proto.SizeVarint(uint64(len(x.StringValue)))
+		n += len(x.StringValue)
+	case nil:
+	default:
+		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+	}
+	return n
+}
+
+// Request message containing the gauge name
+type GaugeRequest struct {
+	Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+}
+
+func (m *GaugeRequest) Reset()                    { *m = GaugeRequest{} }
+func (m *GaugeRequest) String() string            { return proto.CompactTextString(m) }
+func (*GaugeRequest) ProtoMessage()               {}
+func (*GaugeRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+type EmptyMessage struct {
+}
+
+func (m *EmptyMessage) Reset()                    { *m = EmptyMessage{} }
+func (m *EmptyMessage) String() string            { return proto.CompactTextString(m) }
+func (*EmptyMessage) ProtoMessage()               {}
+func (*EmptyMessage) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
+
+func init() {
+	proto.RegisterType((*GaugeResponse)(nil), "grpc.testing.GaugeResponse")
+	proto.RegisterType((*GaugeRequest)(nil), "grpc.testing.GaugeRequest")
+	proto.RegisterType((*EmptyMessage)(nil), "grpc.testing.EmptyMessage")
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
+// Client API for MetricsService service
+
+type MetricsServiceClient interface {
+	// Returns the values of all the gauges that are currently being maintained by
+	// the service
+	GetAllGauges(ctx context.Context, in *EmptyMessage, opts ...grpc.CallOption) (MetricsService_GetAllGaugesClient, error)
+	// Returns the value of one gauge
+	GetGauge(ctx context.Context, in *GaugeRequest, opts ...grpc.CallOption) (*GaugeResponse, error)
+}
+
+type metricsServiceClient struct {
+	cc *grpc.ClientConn
+}
+
+func NewMetricsServiceClient(cc *grpc.ClientConn) MetricsServiceClient {
+	return &metricsServiceClient{cc}
+}
+
+func (c *metricsServiceClient) GetAllGauges(ctx context.Context, in *EmptyMessage, opts ...grpc.CallOption) (MetricsService_GetAllGaugesClient, error) {
+	stream, err := grpc.NewClientStream(ctx, &_MetricsService_serviceDesc.Streams[0], c.cc, "/grpc.testing.MetricsService/GetAllGauges", opts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &metricsServiceGetAllGaugesClient{stream}
+	if err := x.ClientStream.SendMsg(in); err != nil {
+		return nil, err
+	}
+	if err := x.ClientStream.CloseSend(); err != nil {
+		return nil, err
+	}
+	return x, nil
+}
+
+type MetricsService_GetAllGaugesClient interface {
+	Recv() (*GaugeResponse, error)
+	grpc.ClientStream
+}
+
+type metricsServiceGetAllGaugesClient struct {
+	grpc.ClientStream
+}
+
+func (x *metricsServiceGetAllGaugesClient) Recv() (*GaugeResponse, error) {
+	m := new(GaugeResponse)
+	if err := x.ClientStream.RecvMsg(m); err != nil {
+		return nil, err
+	}
+	return m, nil
+}
+
+func (c *metricsServiceClient) GetGauge(ctx context.Context, in *GaugeRequest, opts ...grpc.CallOption) (*GaugeResponse, error) {
+	out := new(GaugeResponse)
+	err := grpc.Invoke(ctx, "/grpc.testing.MetricsService/GetGauge", in, out, c.cc, opts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// Server API for MetricsService service
+
+type MetricsServiceServer interface {
+	// Returns the values of all the gauges that are currently being maintained by
+	// the service
+	GetAllGauges(*EmptyMessage, MetricsService_GetAllGaugesServer) error
+	// Returns the value of one gauge
+	GetGauge(context.Context, *GaugeRequest) (*GaugeResponse, error)
+}
+
+func RegisterMetricsServiceServer(s *grpc.Server, srv MetricsServiceServer) {
+	s.RegisterService(&_MetricsService_serviceDesc, srv)
+}
+
+func _MetricsService_GetAllGauges_Handler(srv interface{}, stream grpc.ServerStream) error {
+	m := new(EmptyMessage)
+	if err := stream.RecvMsg(m); err != nil {
+		return err
+	}
+	return srv.(MetricsServiceServer).GetAllGauges(m, &metricsServiceGetAllGaugesServer{stream})
+}
+
+type MetricsService_GetAllGaugesServer interface {
+	Send(*GaugeResponse) error
+	grpc.ServerStream
+}
+
+type metricsServiceGetAllGaugesServer struct {
+	grpc.ServerStream
+}
+
+func (x *metricsServiceGetAllGaugesServer) Send(m *GaugeResponse) error {
+	return x.ServerStream.SendMsg(m)
+}
+
+func _MetricsService_GetGauge_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(GaugeRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(MetricsServiceServer).GetGauge(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.MetricsService/GetGauge",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(MetricsServiceServer).GetGauge(ctx, req.(*GaugeRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+var _MetricsService_serviceDesc = grpc.ServiceDesc{
+	ServiceName: "grpc.testing.MetricsService",
+	HandlerType: (*MetricsServiceServer)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "GetGauge",
+			Handler:    _MetricsService_GetGauge_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{
+		{
+			StreamName:    "GetAllGauges",
+			Handler:       _MetricsService_GetAllGauges_Handler,
+			ServerStreams: true,
+		},
+	},
+}
+
+var fileDescriptor0 = []byte{
+	// 242 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x7c, 0x90, 0xbd, 0x4e, 0xc3, 0x30,
+	0x14, 0x85, 0x6b, 0x5a, 0xfe, 0xae, 0x4c, 0x07, 0x0b, 0xa1, 0xaa, 0x30, 0x40, 0x26, 0xa6, 0x14,
+	0xc1, 0x13, 0x00, 0x42, 0x94, 0xa1, 0x0b, 0x3c, 0x40, 0x95, 0x86, 0x2b, 0xcb, 0x92, 0x63, 0x1b,
+	0x5f, 0xbb, 0x12, 0x6f, 0xc3, 0xa3, 0xe2, 0x3a, 0x11, 0x4a, 0x18, 0xba, 0x7e, 0x37, 0xf9, 0xce,
+	0x39, 0x86, 0x1b, 0x0a, 0x1e, 0x89, 0x16, 0xd2, 0xbb, 0x7a, 0x1d, 0x90, 0x82, 0x32, 0x72, 0xd1,
+	0x60, 0xf0, 0xaa, 0xa6, 0xd2, 0x79, 0x1b, 0xac, 0xe0, 0xbb, 0x5b, 0xd9, 0xdd, 0x0a, 0x0d, 0x67,
+	0xaf, 0x55, 0x94, 0xf8, 0x8e, 0xe4, 0xac, 0x21, 0x14, 0x1c, 0x26, 0xa6, 0x6a, 0x70, 0xc6, 0xae,
+	0xd9, 0xed, 0xa9, 0x38, 0x07, 0xd0, 0xd6, 0xc8, 0xf5, 0xb6, 0xd2, 0x11, 0x67, 0x07, 0x89, 0x8d,
+	0x97, 0x23, 0x71, 0x01, 0xfc, 0xd3, 0xc6, 0x8d, 0xc6, 0x8e, 0x8f, 0x13, 0x67, 0x2d, 0x4f, 0xf9,
+	0xea, 0xef, 0xfb, 0xc9, 0xce, 0xb1, 0x1c, 0x3d, 0x1d, 0xc3, 0x61, 0x06, 0xc5, 0x15, 0xf0, 0x2e,
+	0xed, 0x2b, 0xa6, 0x02, 0xc3, 0xb0, 0x62, 0x0a, 0xfc, 0xa5, 0x71, 0xe1, 0x7b, 0x95, 0x16, 0x54,
+	0x12, 0xef, 0x7f, 0x18, 0x4c, 0x57, 0x6d, 0xf7, 0x0f, 0xf4, 0x5b, 0x55, 0xa3, 0x78, 0x4b, 0x02,
+	0x0c, 0x8f, 0x5a, 0x67, 0x0d, 0x89, 0x79, 0xd9, 0x5f, 0x53, 0xf6, 0x7f, 0x9f, 0x5f, 0x0e, 0x6f,
+	0x83, 0x99, 0x77, 0x4c, 0x3c, 0xc3, 0x49, 0x52, 0x65, 0xfa, 0x5f, 0xd3, 0xef, 0xb8, 0x57, 0xb3,
+	0x39, 0xca, 0x6f, 0xfa, 0xf0, 0x1b, 0x00, 0x00, 0xff, 0xff, 0xbc, 0xb7, 0x83, 0x53, 0x78, 0x01,
+	0x00, 0x00,
+}
diff --git a/go/src/google.golang.org/grpc/stress/grpc_testing/metrics.proto b/go/src/google.golang.org/grpc/stress/grpc_testing/metrics.proto
new file mode 100644
index 0000000..1202b20
--- /dev/null
+++ b/go/src/google.golang.org/grpc/stress/grpc_testing/metrics.proto
@@ -0,0 +1,64 @@
+// Copyright 2015-2016, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+//     * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Contains the definitions for a metrics service and the type of metrics
+// exposed by the service.
+//
+// Currently, 'Gauge' (i.e. a metric that represents the measured value of
+// something at an instant of time) is the only metric type supported by the
+// service.
+syntax = "proto3";
+
+package grpc.testing;
+
+// Response message containing the gauge name and value
+message GaugeResponse {
+  string name = 1;
+  oneof value {
+    int64 long_value = 2;
+    double double_value = 3;
+    string string_value = 4;
+  }
+}
+
+// Request message containing the gauge name
+message GaugeRequest {
+  string name = 1;
+}
+
+message EmptyMessage {}
+
+service MetricsService {
+  // Returns the values of all the gauges that are currently being maintained by
+  // the service
+  rpc GetAllGauges(EmptyMessage) returns (stream GaugeResponse);
+
+  // Returns the value of one gauge
+  rpc GetGauge(GaugeRequest) returns (GaugeResponse);
+}
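The `oneof value` field in this .proto is what produces the `isGaugeResponse_Value` interface and the per-field wrapper structs in the generated Go code above. A minimal standalone sketch of that mapping, with types renamed for brevity (this is an illustration of the pattern, not the generated API):

```go
package main

import "fmt"

// isValue mirrors the unexported one-method interface the generator
// emits for a oneof (isGaugeResponse_Value).
type isValue interface{ isValue() }

// One wrapper struct per oneof field.
type longValue struct{ v int64 }
type doubleValue struct{ v float64 }

func (longValue) isValue()   {}
func (doubleValue) isValue() {}

type gaugeResponse struct {
	name  string
	value isValue
}

// getLongValue mirrors the generated GetLongValue accessor: it returns
// the zero value when a different oneof field is set.
func (g gaugeResponse) getLongValue() int64 {
	if x, ok := g.value.(longValue); ok {
		return x.v
	}
	return 0
}

func main() {
	g := gaugeResponse{name: "qps", value: longValue{42}}
	fmt.Println(g.getLongValue()) // 42
	h := gaugeResponse{name: "load", value: doubleValue{0.5}}
	fmt.Println(h.getLongValue()) // 0: a double is set, not a long
}
```

This is why the metrics client below type-asserts on `GetValue()` before calling `GetLongValue()`: the accessor alone cannot distinguish "long value 0" from "some other field set".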
diff --git a/go/src/google.golang.org/grpc/stress/metrics_client/main.go b/go/src/google.golang.org/grpc/stress/metrics_client/main.go
new file mode 100644
index 0000000..983a8ff
--- /dev/null
+++ b/go/src/google.golang.org/grpc/stress/metrics_client/main.go
@@ -0,0 +1,97 @@
+/*
+ *
+ * Copyright 2016, Google Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other materials provided with the
+ * distribution.
+ *     * Neither the name of Google Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+package main
+
+import (
+	"flag"
+	"fmt"
+	"io"
+
+	"golang.org/x/net/context"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/grpclog"
+	metricspb "google.golang.org/grpc/stress/grpc_testing"
+)
+
+var (
+	metricsServerAddress = flag.String("metrics_server_address", "", "The metrics server address in the format <hostname>:<port>")
+	totalOnly            = flag.Bool("total_only", false, "If true, this prints only the total value of all gauges")
+)
+
+func printMetrics(client metricspb.MetricsServiceClient, totalOnly bool) {
+	stream, err := client.GetAllGauges(context.Background(), &metricspb.EmptyMessage{})
+	if err != nil {
+		grpclog.Fatalf("failed to call GetAllGauges: %v", err)
+	}
+
+	var (
+		overallQPS int64
+		rpcStatus  error
+	)
+	for {
+		gaugeResponse, err := stream.Recv()
+		if err != nil {
+			rpcStatus = err
+			break
+		}
+		if _, ok := gaugeResponse.GetValue().(*metricspb.GaugeResponse_LongValue); !ok {
+			panic(fmt.Sprintf("gauge %s is not a long value", gaugeResponse.Name))
+		}
+		v := gaugeResponse.GetLongValue()
+		if !totalOnly {
+			grpclog.Printf("%s: %d", gaugeResponse.Name, v)
+		}
+		overallQPS += v
+	}
+	if rpcStatus != io.EOF {
+		grpclog.Fatalf("failed to finish server streaming: %v", rpcStatus)
+	}
+	grpclog.Printf("overall qps: %d", overallQPS)
+}
+
+func main() {
+	flag.Parse()
+	if *metricsServerAddress == "" {
+		grpclog.Fatalf("Metrics server address is empty.")
+	}
+
+	conn, err := grpc.Dial(*metricsServerAddress, grpc.WithInsecure())
+	if err != nil {
+		grpclog.Fatalf("cannot connect to metrics server: %v", err)
+	}
+	defer conn.Close()
+
+	c := metricspb.NewMetricsServiceClient(conn)
+	printMetrics(c, *totalOnly)
+}
diff --git a/go/src/google.golang.org/grpc/test/end2end_test.go b/go/src/google.golang.org/grpc/test/end2end_test.go
index 86418dc..b539584 100644
--- a/go/src/google.golang.org/grpc/test/end2end_test.go
+++ b/go/src/google.golang.org/grpc/test/end2end_test.go
@@ -130,8 +130,11 @@
 func (s *testServer) UnaryCall(ctx context.Context, in *testpb.SimpleRequest) (*testpb.SimpleResponse, error) {
 	md, ok := metadata.FromContext(ctx)
 	if ok {
+		if _, exists := md[":authority"]; !exists {
+			return nil, grpc.Errorf(codes.DataLoss, "expected an :authority metadata: %v", md)
+		}
 		if err := grpc.SendHeader(ctx, md); err != nil {
-			return nil, fmt.Errorf("grpc.SendHeader(%v, %v) = %v, want %v", ctx, md, err, nil)
+			return nil, fmt.Errorf("grpc.SendHeader(_, %v) = %v, want %v", md, err, nil)
 		}
 		grpc.SetTrailer(ctx, testTrailerMetadata)
 	}
@@ -159,7 +162,6 @@
 			return nil, fmt.Errorf("Unknown server name %q", serverName)
 		}
 	}
-
 	// Simulate some service delay.
 	time.Sleep(time.Second)
 
@@ -167,6 +169,7 @@
 	if err != nil {
 		return nil, err
 	}
+
 	return &testpb.SimpleResponse{
 		Payload: payload,
 	}, nil
@@ -174,8 +177,11 @@
 
 func (s *testServer) StreamingOutputCall(args *testpb.StreamingOutputCallRequest, stream testpb.TestService_StreamingOutputCallServer) error {
 	if md, ok := metadata.FromContext(stream.Context()); ok {
-		// For testing purpose, returns an error if there is attached metadata.
-		if len(md) > 0 {
+		if _, exists := md[":authority"]; !exists {
+			return grpc.Errorf(codes.DataLoss, "expected an :authority metadata: %v", md)
+		}
+		// For testing purposes, return an error if there is attached metadata other than :authority.
+		if len(md) > 1 {
 			return grpc.Errorf(codes.DataLoss, "got extra metadata")
 		}
 	}
@@ -290,60 +296,6 @@
 
 const tlsDir = "testdata/"
 
-func TestReconnectTimeout(t *testing.T) {
-	defer leakCheck(t)()
-	restore := declareLogNoise(t,
-		"transport: http2Client.notifyError got notified that the client transport was broken",
-		"grpc: Conn.resetTransport failed to create client transport: connection error: desc = \"transport",
-		"grpc: Conn.transportMonitor exits due to: grpc: timed out trying to connect",
-	)
-	defer restore()
-
-	lis, err := net.Listen("tcp", "localhost:0")
-	if err != nil {
-		t.Fatalf("Failed to listen: %v", err)
-	}
-	_, port, err := net.SplitHostPort(lis.Addr().String())
-	if err != nil {
-		t.Fatalf("Failed to parse listener address: %v", err)
-	}
-	addr := "localhost:" + port
-	conn, err := grpc.Dial(addr, grpc.WithTimeout(5*time.Second), grpc.WithBlock(), grpc.WithInsecure())
-	if err != nil {
-		t.Fatalf("Failed to dial to the server %q: %v", addr, err)
-	}
-	// Close unaccepted connection (i.e., conn).
-	lis.Close()
-	tc := testpb.NewTestServiceClient(conn)
-	waitC := make(chan struct{})
-	go func() {
-		defer close(waitC)
-		const argSize = 271828
-		const respSize = 314159
-
-		payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, argSize)
-		if err != nil {
-			t.Error(err)
-			return
-		}
-
-		req := &testpb.SimpleRequest{
-			ResponseType: testpb.PayloadType_COMPRESSABLE.Enum(),
-			ResponseSize: proto.Int32(respSize),
-			Payload:      payload,
-		}
-		if _, err := tc.UnaryCall(context.Background(), req); err == nil {
-			t.Errorf("TestService/UnaryCall(_, _) = _, <nil>, want _, non-nil")
-			return
-		}
-	}()
-	// Block until reconnect times out.
-	<-waitC
-	if err := conn.Close(); err != grpc.ErrClientConnClosing {
-		t.Fatalf("%v.Close() = %v, want %v", conn, err, grpc.ErrClientConnClosing)
-	}
-}
-
 func unixDialer(addr string, timeout time.Duration) (net.Conn, error) {
 	return net.DialTimeout("unix", addr, timeout)
 }
@@ -420,6 +372,8 @@
 	userAgent         string
 	clientCompression bool
 	serverCompression bool
+	unaryInt          grpc.UnaryServerInterceptor
+	streamInt         grpc.StreamServerInterceptor
 
 	// srv and srvAddr are set once startServer is called.
 	srv     *grpc.Server
@@ -432,14 +386,17 @@
 func (te *test) tearDown() {
 	if te.cancel != nil {
 		te.cancel()
+		te.cancel = nil
 	}
-	te.srv.Stop()
 	if te.cc != nil {
 		te.cc.Close()
+		te.cc = nil
 	}
 	if te.restoreLogs != nil {
 		te.restoreLogs()
+		te.restoreLogs = nil
 	}
+	te.srv.Stop()
 }
 
 // newTest returns a new test using the provided testing.T and
@@ -468,7 +425,12 @@
 			grpc.RPCDecompressor(grpc.NewGZIPDecompressor()),
 		)
 	}
-
+	if te.unaryInt != nil {
+		sopts = append(sopts, grpc.UnaryInterceptor(te.unaryInt))
+	}
+	if te.streamInt != nil {
+		sopts = append(sopts, grpc.StreamInterceptor(te.streamInt))
+	}
 	la := "localhost:0"
 	switch e.network {
 	case "unix":
@@ -576,6 +538,7 @@
 
 func testTimeoutOnDeadServer(t *testing.T, e env) {
 	te := newTest(t, e)
+	te.userAgent = testAppUA
 	te.declareLogNoise(
 		"transport: http2Client.notifyError got notified that the client transport was broken EOF",
 		"grpc: Conn.transportMonitor exits due to: grpc: the client connection is closing",
@@ -587,37 +550,17 @@
 
 	cc := te.clientConn()
 	tc := testpb.NewTestServiceClient(cc)
-	ctx, _ := context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Idle); err != nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, <nil>", grpc.Idle, err)
-	}
-	ctx, _ = context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Connecting); err != nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, <nil>", grpc.Connecting, err)
-	}
-	if state, err := cc.State(); err != nil || state != grpc.Ready {
-		t.Fatalf("cc.State() = %s, %v, want %s, <nil>", state, err, grpc.Ready)
-	}
-	ctx, _ = context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Ready); err != context.DeadlineExceeded {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, %v", grpc.Ready, err, context.DeadlineExceeded)
+	if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); err != nil {
+		t.Fatalf("TestService/EmptyCall(_, _) = _, %v, want _, <nil>", err)
 	}
 	te.srv.Stop()
 	// Set -1 as the timeout to make sure if transportMonitor gets error
 	// notification in time the failure path of the 1st invoke of
 	// ClientConn.wait hits the deadline exceeded error.
-	ctx, _ = context.WithTimeout(context.Background(), -1)
+	ctx, _ := context.WithTimeout(context.Background(), -1)
 	if _, err := tc.EmptyCall(ctx, &testpb.Empty{}); grpc.Code(err) != codes.DeadlineExceeded {
-		t.Fatalf("TestService/EmptyCall(%v, _) = _, error %v, want _, error code: %d", ctx, err, codes.DeadlineExceeded)
+		t.Fatalf("TestService/EmptyCall(%v, _) = _, %v, want _, error code: %d", ctx, err, codes.DeadlineExceeded)
 	}
-	ctx, _ = context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Ready); err != nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, <nil>", grpc.Ready, err)
-	}
-	if state, err := cc.State(); err != nil || (state != grpc.Connecting && state != grpc.TransientFailure) {
-		t.Fatalf("cc.State() = %s, %v, want %s or %s, <nil>", state, err, grpc.Connecting, grpc.TransientFailure)
-	}
-	cc.Close()
 	awaitNewConnLogOutput()
 }
 
@@ -681,6 +624,10 @@
 func TestHealthCheckOff(t *testing.T) {
 	defer leakCheck(t)()
 	for _, e := range listTestEnv() {
+		// TODO(bradfitz): Temporarily skip this env due to #619.
+		if e.name == "handler-tls" {
+			continue
+		}
 		testHealthCheckOff(t, e)
 	}
 }
@@ -739,6 +686,24 @@
 
 }
 
+func TestErrorChanNoIO(t *testing.T) {
+	defer leakCheck(t)()
+	for _, e := range listTestEnv() {
+		testErrorChanNoIO(t, e)
+	}
+}
+
+func testErrorChanNoIO(t *testing.T, e env) {
+	te := newTest(t, e)
+	te.startServer()
+	defer te.tearDown()
+
+	tc := testpb.NewTestServiceClient(te.clientConn())
+	if _, err := tc.FullDuplexCall(context.Background()); err != nil {
+		t.Fatalf("%v.FullDuplexCall(_) = _, %v, want <nil>", tc, err)
+	}
+}
+
 func TestEmptyUnaryWithUserAgent(t *testing.T) {
 	defer leakCheck(t)()
 	for _, e := range listTestEnv() {
@@ -753,23 +718,6 @@
 	defer te.tearDown()
 
 	cc := te.clientConn()
-
-	// Wait until cc is connected.
-	ctx, _ := context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Idle); err != nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, <nil>", grpc.Idle, err)
-	}
-	ctx, _ = context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Connecting); err != nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, <nil>", grpc.Connecting, err)
-	}
-	if state, err := cc.State(); err != nil || state != grpc.Ready {
-		t.Fatalf("cc.State() = %s, %v, want %s, <nil>", state, err, grpc.Ready)
-	}
-	ctx, _ = context.WithTimeout(context.Background(), time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Ready); err == nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, <nil>, want _, %v", grpc.Ready, context.DeadlineExceeded)
-	}
 	tc := testpb.NewTestServiceClient(cc)
 	var header metadata.MD
 	reply, err := tc.EmptyCall(context.Background(), &testpb.Empty{}, grpc.Header(&header))
@@ -781,15 +729,6 @@
 	}
 
 	te.srv.Stop()
-	cc.Close()
-
-	ctx, _ = context.WithTimeout(context.Background(), 5*time.Second)
-	if _, err := cc.WaitForStateChange(ctx, grpc.Ready); err != nil {
-		t.Fatalf("cc.WaitForStateChange(_, %s) = _, %v, want _, <nil>", grpc.Ready, err)
-	}
-	if state, err := cc.State(); err != nil || state != grpc.Shutdown {
-		t.Fatalf("cc.State() = %s, %v, want %s, <nil>", state, err, grpc.Shutdown)
-	}
 }
 
 func TestFailedEmptyUnary(t *testing.T) {
@@ -971,7 +910,6 @@
 
 	cc := te.clientConn()
 	tc := testpb.NewTestServiceClient(cc)
-
 	var wg sync.WaitGroup
 
 	numRPC := 1000
@@ -1037,9 +975,8 @@
 	}
 	for i := -1; i <= 10; i++ {
 		ctx, _ := context.WithTimeout(context.Background(), time.Duration(i)*time.Millisecond)
-		reply, err := tc.UnaryCall(ctx, req)
-		if grpc.Code(err) != codes.DeadlineExceeded {
-			t.Fatalf(`TestService/UnaryCallv(_, _) = %v, %v; want <nil>, error code: %d`, reply, err, codes.DeadlineExceeded)
+		if _, err := tc.UnaryCall(ctx, req); grpc.Code(err) != codes.DeadlineExceeded {
+		t.Fatalf("TestService/UnaryCall(_, _) = _, %v; want _, error code: %d", err, codes.DeadlineExceeded)
 		}
 	}
 }
@@ -1075,12 +1012,9 @@
 	}
 	ctx, cancel := context.WithCancel(context.Background())
 	time.AfterFunc(1*time.Millisecond, cancel)
-	reply, err := tc.UnaryCall(ctx, req)
-	if grpc.Code(err) != codes.Canceled {
-		t.Fatalf(`TestService/UnaryCall(_, _) = %v, %v; want <nil>, error code: %d`, reply, err, codes.Canceled)
+	if r, err := tc.UnaryCall(ctx, req); grpc.Code(err) != codes.Canceled {
+		t.Fatalf("TestService/UnaryCall(_, _) = %v, %v; want _, error code: %d", r, err, codes.Canceled)
 	}
-	cc.Close()
-
 	awaitNewConnLogOutput()
 }
 
@@ -1566,6 +1500,61 @@
 	}
 }
 
+func TestStreamsQuotaRecovery(t *testing.T) {
+	defer leakCheck(t)()
+	for _, e := range listTestEnv() {
+		testStreamsQuotaRecovery(t, e)
+	}
+}
+
+func testStreamsQuotaRecovery(t *testing.T, e env) {
+	te := newTest(t, e)
+	te.declareLogNoise(
+		"http2Client.notifyError got notified that the client transport was broken",
+		"Conn.resetTransport failed to create client transport",
+		"grpc: the client connection is closing",
+	)
+	te.maxStream = 1 // Allows 1 live stream.
+	te.startServer()
+	defer te.tearDown()
+
+	cc := te.clientConn()
+	tc := testpb.NewTestServiceClient(cc)
+	ctx, cancel := context.WithCancel(context.Background())
+	if _, err := tc.StreamingInputCall(ctx); err != nil {
+		t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, <nil>", tc, err)
+	}
+	// Loop until the new max stream setting is effective.
+	for {
+		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
+		defer cancel()
+		_, err := tc.StreamingInputCall(ctx)
+		if err == nil {
+			time.Sleep(time.Second)
+			continue
+		}
+		if grpc.Code(err) == codes.DeadlineExceeded {
+			break
+		}
+		t.Fatalf("%v.StreamingInputCall(_) = _, %v, want _, error code %d", tc, err, codes.DeadlineExceeded)
+	}
+	cancel()
+
+	var wg sync.WaitGroup
+	for i := 0; i < 100; i++ {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			ctx, cancel := context.WithCancel(context.Background())
+			if _, err := tc.StreamingInputCall(ctx); err != nil {
+				t.Errorf("%v.StreamingInputCall(_) = _, %v, want _, <nil>", tc, err)
+			}
+			cancel()
+		}()
+	}
+	wg.Wait()
+}
+
 func TestCompressServerHasNoSupport(t *testing.T) {
 	defer leakCheck(t)()
 	for _, e := range listTestEnv() {
@@ -1592,8 +1581,8 @@
 		ResponseSize: proto.Int32(respSize),
 		Payload:      payload,
 	}
-	if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.InvalidArgument {
-		t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code %d", err, codes.InvalidArgument)
+	if _, err := tc.UnaryCall(context.Background(), req); err == nil || grpc.Code(err) != codes.Unimplemented {
+		t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, error code %d", err, codes.Unimplemented)
 	}
 	// Streaming RPC
 	stream, err := tc.FullDuplexCall(context.Background())
@@ -1617,8 +1606,8 @@
 	if err := stream.Send(sreq); err != nil {
 		t.Fatalf("%v.Send(%v) = %v, want <nil>", stream, sreq, err)
 	}
-	if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.InvalidArgument {
-		t.Fatalf("%v.Recv() = %v, want error code %d", stream, err, codes.InvalidArgument)
+	if _, err := stream.Recv(); err == nil || grpc.Code(err) != codes.Unimplemented {
+		t.Fatalf("%v.Recv() = %v, want error code %d", stream, err, codes.Unimplemented)
 	}
 }
 
@@ -1649,7 +1638,8 @@
 		ResponseSize: proto.Int32(respSize),
 		Payload:      payload,
 	}
-	if _, err := tc.UnaryCall(context.Background(), req); err != nil {
+	ctx := metadata.NewContext(context.Background(), metadata.Pairs("something", "something"))
+	if _, err := tc.UnaryCall(ctx, req); err != nil {
 		t.Fatalf("TestService/UnaryCall(_, _) = _, %v, want _, <nil>", err)
 	}
 	// Streaming RPC
@@ -1681,6 +1671,88 @@
 	}
 }
 
+func TestUnaryServerInterceptor(t *testing.T) {
+	defer leakCheck(t)()
+	for _, e := range listTestEnv() {
+		testUnaryServerInterceptor(t, e)
+	}
+}
+
+func errInjector(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
+	return nil, grpc.Errorf(codes.PermissionDenied, "")
+}
+
+func testUnaryServerInterceptor(t *testing.T, e env) {
+	te := newTest(t, e)
+	te.unaryInt = errInjector
+	te.startServer()
+	defer te.tearDown()
+
+	tc := testpb.NewTestServiceClient(te.clientConn())
+	if _, err := tc.EmptyCall(context.Background(), &testpb.Empty{}); grpc.Code(err) != codes.PermissionDenied {
+		t.Fatalf("%v.EmptyCall(_, _) = _, %v, want _, error code %d", tc, err, codes.PermissionDenied)
+	}
+}
+
+func TestStreamServerInterceptor(t *testing.T) {
+	defer leakCheck(t)()
+	for _, e := range listTestEnv() {
+		// TODO(bradfitz): Temporarily skip this env due to #619.
+		if e.name == "handler-tls" {
+			continue
+		}
+		testStreamServerInterceptor(t, e)
+	}
+}
+
+func fullDuplexOnly(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
+	if info.FullMethod == "/grpc.testing.TestService/FullDuplexCall" {
+		return handler(srv, ss)
+	}
+	// Reject the other methods.
+	return grpc.Errorf(codes.PermissionDenied, "")
+}
+
+func testStreamServerInterceptor(t *testing.T, e env) {
+	te := newTest(t, e)
+	te.streamInt = fullDuplexOnly
+	te.startServer()
+	defer te.tearDown()
+
+	tc := testpb.NewTestServiceClient(te.clientConn())
+	respParam := []*testpb.ResponseParameters{
+		{
+			Size: proto.Int32(int32(1)),
+		},
+	}
+	payload, err := newPayload(testpb.PayloadType_COMPRESSABLE, int32(1))
+	if err != nil {
+		t.Fatal(err)
+	}
+	req := &testpb.StreamingOutputCallRequest{
+		ResponseType:       testpb.PayloadType_COMPRESSABLE.Enum(),
+		ResponseParameters: respParam,
+		Payload:            payload,
+	}
+	s1, err := tc.StreamingOutputCall(context.Background(), req)
+	if err != nil {
+		t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want _, <nil>", tc, err)
+	}
+	if _, err := s1.Recv(); grpc.Code(err) != codes.PermissionDenied {
+		t.Fatalf("%v.StreamingOutputCall(_) = _, %v, want _, error code %d", tc, err, codes.PermissionDenied)
+	}
+	s2, err := tc.FullDuplexCall(context.Background())
+	if err != nil {
+		t.Fatalf("%v.FullDuplexCall(_) = _, %v, want _, <nil>", tc, err)
+	}
+	if err := s2.Send(req); err != nil {
+		t.Fatalf("%v.Send(_) = %v, want <nil>", s2, err)
+	}
+	if _, err := s2.Recv(); err != nil {
+		t.Fatalf("%v.Recv() = _, %v, want _, <nil>", s2, err)
+	}
+}
+
 // funcServer implements methods of TestServiceServer using funcs,
 // similar to an http.HandlerFunc.
 // Any unimplemented method will crash. Tests implement the method(s)
@@ -1839,6 +1911,7 @@
 			strings.Contains(stack, "testing.Main(") ||
 			strings.Contains(stack, "runtime.goexit") ||
 			strings.Contains(stack, "created by runtime.gc") ||
+			strings.Contains(stack, "created by google3/base/go/log.init") ||
 			strings.Contains(stack, "interestingGoroutines") ||
 			strings.Contains(stack, "runtime.MHeap_Scavenger") ||
 			strings.Contains(stack, "signal.signal_recv") ||
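Several hunks above return quota to a pool when an RPC fails before a stream reaches the caller (`t.streamsQuota.add(1)` in `NewStream`, `t.sendQuotaPool.add(len(p))` in `Write`), and `testStreamsQuotaRecovery` exercises exactly that path with `maxStream = 1`. The sketch below is a toy, channel-based quota pool in that spirit; it is not the transport package's actual `quotaPool` type, just an illustration of the acquire/add-back pattern:

```go
package main

import "fmt"

// quotaPool is a toy, channel-based quota pool: acquire takes the currently
// available quota off the channel, and add puts quota back, e.g. when an RPC
// fails before a stream is handed to the caller.
type quotaPool struct {
	c chan int
}

func newQuotaPool(q int) *quotaPool {
	qp := &quotaPool{c: make(chan int, 1)}
	qp.c <- q
	return qp
}

// add returns n units of quota to the pool.
func (qp *quotaPool) add(n int) {
	select {
	case q := <-qp.c:
		qp.c <- q + n
	default:
		qp.c <- n
	}
}

// acquire takes one unit of quota, failing instead of blocking when none is left.
func (qp *quotaPool) acquire() bool {
	select {
	case q := <-qp.c:
		if q <= 0 {
			qp.c <- q
			return false
		}
		if q > 1 {
			qp.c <- q - 1
		}
		return true
	default:
		return false
	}
}

func main() {
	qp := newQuotaPool(1)     // maxStream = 1, as in testStreamsQuotaRecovery
	fmt.Println(qp.acquire()) // the single stream slot
	fmt.Println(qp.acquire()) // quota exhausted
	qp.add(1)                 // stream failed or finished: return the slot
	fmt.Println(qp.acquire()) // usable again
}
```

The key property the tests above rely on is that quota given back via `add` is immediately reusable by later acquirers.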
diff --git a/go/src/google.golang.org/grpc/test/grpc_testing/test.pb.go b/go/src/google.golang.org/grpc/test/grpc_testing/test.pb.go
index 7b0803f..5a93520 100644
--- a/go/src/google.golang.org/grpc/test/grpc_testing/test.pb.go
+++ b/go/src/google.golang.org/grpc/test/grpc_testing/test.pb.go
@@ -1,12 +1,12 @@
 // Code generated by protoc-gen-go.
-// source: test.proto
+// source: test/grpc_testing/test.proto
 // DO NOT EDIT!
 
 /*
 Package grpc_testing is a generated protocol buffer package.
 
 It is generated from these files:
-	test.proto
+	test/grpc_testing/test.proto
 
 It has these top-level messages:
 	Empty
@@ -356,6 +356,10 @@
 var _ context.Context
 var _ grpc.ClientConn
 
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion2
+
 // Client API for TestService service
 
 type TestServiceClient interface {
@@ -564,28 +568,40 @@
 	s.RegisterService(&_TestService_serviceDesc, srv)
 }
 
-func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _TestService_EmptyCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(Empty)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(TestServiceServer).EmptyCall(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(TestServiceServer).EmptyCall(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.TestService/EmptyCall",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(TestServiceServer).EmptyCall(ctx, req.(*Empty))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
-func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error) (interface{}, error) {
+func _TestService_UnaryCall_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
 	in := new(SimpleRequest)
 	if err := dec(in); err != nil {
 		return nil, err
 	}
-	out, err := srv.(TestServiceServer).UnaryCall(ctx, in)
-	if err != nil {
-		return nil, err
+	if interceptor == nil {
+		return srv.(TestServiceServer).UnaryCall(ctx, in)
 	}
-	return out, nil
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: "/grpc.testing.TestService/UnaryCall",
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(TestServiceServer).UnaryCall(ctx, req.(*SimpleRequest))
+	}
+	return interceptor(ctx, in, info, handler)
 }
 
 func _TestService_StreamingOutputCall_Handler(srv interface{}, stream grpc.ServerStream) error {
@@ -727,41 +743,38 @@
 }
 
 var fileDescriptor0 = []byte{
-	// 567 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xbc, 0x54, 0x51, 0x6f, 0xd2, 0x50,
-	0x14, 0xb6, 0x03, 0x64, 0x1c, 0x58, 0x43, 0x0e, 0x59, 0x64, 0x9d, 0x89, 0x4b, 0x7d, 0xb0, 0x9a,
-	0x88, 0x86, 0x44, 0x1f, 0x35, 0x73, 0x63, 0x71, 0x09, 0x03, 0x6c, 0xe1, 0x99, 0x5c, 0xe1, 0x0e,
-	0x9b, 0x94, 0xb6, 0xb6, 0xb7, 0x46, 0x7c, 0xf0, 0x8f, 0xf9, 0x67, 0xfc, 0x11, 0xfe, 0x00, 0xef,
-	0xbd, 0x6d, 0xa1, 0x40, 0x17, 0x99, 0xc6, 0xbd, 0xb5, 0xdf, 0xf9, 0xce, 0x77, 0xbe, 0xef, 0x9e,
-	0xdb, 0x02, 0x30, 0x1a, 0xb2, 0x96, 0x1f, 0x78, 0xcc, 0xc3, 0xda, 0x2c, 0xf0, 0x27, 0x2d, 0x01,
-	0xd8, 0xee, 0x4c, 0x2f, 0x43, 0xa9, 0x33, 0xf7, 0xd9, 0x42, 0xef, 0x42, 0x79, 0x40, 0x16, 0x8e,
-	0x47, 0xa6, 0xf8, 0x1c, 0x8a, 0x6c, 0xe1, 0xd3, 0xa6, 0x72, 0xa2, 0x18, 0x6a, 0xfb, 0xa8, 0x95,
-	0x6d, 0x68, 0x25, 0xa4, 0x21, 0x27, 0x98, 0x92, 0x86, 0x08, 0xc5, 0x8f, 0xde, 0x74, 0xd1, 0xdc,
-	0xe3, 0xf4, 0x9a, 0x29, 0x9f, 0xf5, 0x5f, 0x0a, 0x1c, 0x58, 0xf6, 0xdc, 0x77, 0xa8, 0x49, 0x3f,
-	0x47, 0xbc, 0x15, 0xdf, 0xc0, 0x41, 0x40, 0x43, 0xdf, 0x73, 0x43, 0x3a, 0xde, 0x4d, 0xbd, 0x96,
-	0xf2, 0xc5, 0x1b, 0x3e, 0xce, 0xf4, 0x87, 0xf6, 0x37, 0x2a, 0xc7, 0x95, 0x56, 0x24, 0x8b, 0x63,
-	0xf8, 0x02, 0xca, 0x7e, 0xac, 0xd0, 0x2c, 0xf0, 0x72, 0xb5, 0x7d, 0x98, 0x2b, 0x6f, 0xa6, 0x2c,
-	0xa1, 0x7a, 0x6d, 0x3b, 0xce, 0x38, 0x0a, 0x69, 0xe0, 0x92, 0x39, 0x6d, 0x16, 0x79, 0xdb, 0xbe,
-	0x59, 0x13, 0xe0, 0x28, 0xc1, 0xd0, 0x80, 0xba, 0x24, 0x79, 0x24, 0x62, 0x9f, 0xc6, 0xe1, 0xc4,
-	0xe3, 0xee, 0x4b, 0x92, 0xa7, 0x0a, 0xbc, 0x2f, 0x60, 0x4b, 0xa0, 0xfa, 0x77, 0x50, 0xd3, 0xd4,
-	0xb1, 0xab, 0xac, 0x23, 0x65, 0x27, 0x47, 0x1a, 0xec, 0x2f, 0xcd, 0x88, 0x88, 0x15, 0x73, 0xf9,
-	0x8e, 0x8f, 0xa0, 0x9a, 0xf5, 0x50, 0x90, 0x65, 0xf0, 0x56, 0xf3, 0xbb, 0x70, 0x64, 0xb1, 0x80,
-	0x92, 0x39, 0x97, 0xbe, 0x74, 0xfd, 0x88, 0x9d, 0x11, 0xc7, 0x49, 0x37, 0x70, 0x5b, 0x2b, 0xfa,
-	0x10, 0xb4, 0x3c, 0xb5, 0x24, 0xd9, 0x6b, 0x78, 0x40, 0x66, 0xb3, 0x80, 0xce, 0x08, 0xa3, 0xd3,
-	0x71, 0xd2, 0x13, 0xaf, 0x46, 0x91, 0xab, 0x39, 0x5c, 0x95, 0x13, 0x69, 0xb1, 0x23, 0xfd, 0x12,
-	0x30, 0xd5, 0x18, 0x90, 0x80, 0xc7, 0x62, 0x34, 0x08, 0xc5, 0x25, 0xca, 0xb4, 0xca, 0x67, 0x11,
-	0xd7, 0x76, 0x79, 0xf5, 0x0b, 0x11, 0x0b, 0x4a, 0x16, 0x0e, 0x29, 0x34, 0x0a, 0xf5, 0x9f, 0x4a,
-	0xc6, 0x61, 0x3f, 0x62, 0x1b, 0x81, 0xff, 0xf5, 0xca, 0x7d, 0x80, 0xc6, 0xb2, 0xdf, 0x5f, 0x5a,
-	0xe5, 0x3e, 0x0a, 0xfc, 0xf0, 0x4e, 0xd6, 0x55, 0xb6, 0x23, 0x99, 0x18, 0x6c, 0xc7, 0xbc, 0xed,
-	0x05, 0xd5, 0x7b, 0x70, 0x9c, 0x9b, 0xf0, 0x2f, 0xaf, 0xd7, 0xb3, 0xb7, 0x50, 0xcd, 0x04, 0xc6,
-	0x3a, 0xd4, 0xce, 0xfa, 0x57, 0x03, 0xb3, 0x63, 0x59, 0xa7, 0xef, 0xba, 0x9d, 0xfa, 0x3d, 0xbe,
-	0x08, 0x75, 0xd4, 0x5b, 0xc3, 0x14, 0x04, 0xb8, 0x6f, 0x9e, 0xf6, 0xce, 0xfb, 0x57, 0xf5, 0xbd,
-	0xf6, 0x8f, 0x22, 0x54, 0x87, 0x5c, 0xdd, 0xe2, 0x4b, 0xb0, 0x27, 0x14, 0x5f, 0x41, 0x45, 0xfe,
-	0x40, 0x84, 0x2d, 0x6c, 0xac, 0x4f, 0x97, 0x05, 0x2d, 0x0f, 0xc4, 0x0b, 0xa8, 0x8c, 0x5c, 0x12,
-	0xc4, 0x6d, 0xc7, 0xeb, 0x8c, 0xb5, 0x1f, 0x87, 0xf6, 0x30, 0xbf, 0x98, 0x1c, 0x80, 0x03, 0x8d,
-	0x9c, 0xf3, 0x41, 0x63, 0xa3, 0xe9, 0xc6, 0x4b, 0xa2, 0x3d, 0xdd, 0x81, 0x19, 0xcf, 0x7a, 0xa9,
-	0xa0, 0x0d, 0xb8, 0xfd, 0x45, 0xe0, 0x93, 0x1b, 0x24, 0x36, 0xbf, 0x40, 0xcd, 0xf8, 0x33, 0x31,
-	0x1e, 0x65, 0x88, 0x51, 0xea, 0x45, 0xe4, 0x38, 0xe7, 0x11, 0x4f, 0xfb, 0xf5, 0xbf, 0x65, 0x32,
-	0x14, 0x99, 0x4a, 0x7d, 0x4f, 0x9c, 0xeb, 0x3b, 0x18, 0xf5, 0x3b, 0x00, 0x00, 0xff, 0xff, 0x4c,
-	0x41, 0xfe, 0xb6, 0x89, 0x06, 0x00, 0x00,
+	// 517 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xbc, 0x53, 0xcd, 0x6e, 0xd3, 0x40,
+	0x10, 0xc6, 0x6d, 0x42, 0x9a, 0x49, 0x6a, 0x59, 0x13, 0x55, 0xb8, 0x6e, 0x25, 0x2a, 0x1f, 0xa8,
+	0xe1, 0x90, 0x56, 0x91, 0x10, 0xa7, 0x0a, 0x4a, 0x9a, 0x0a, 0x24, 0xda, 0x44, 0x71, 0x7b, 0xb6,
+	0x96, 0x64, 0x6b, 0x2c, 0x6d, 0x6c, 0x63, 0xaf, 0x11, 0xe1, 0xad, 0x38, 0x70, 0xe2, 0xe5, 0xd8,
+	0xb5, 0x9d, 0x60, 0x07, 0x17, 0x92, 0x03, 0x3d, 0x25, 0x9a, 0xf9, 0xfe, 0x66, 0xc6, 0x0b, 0x87,
+	0x9c, 0xc6, 0xfc, 0xc4, 0x8d, 0xc2, 0x89, 0x23, 0xff, 0x79, 0xbe, 0x7b, 0x22, 0x7f, 0xbb, 0x61,
+	0x14, 0xf0, 0x00, 0xdb, 0xb2, 0xd1, 0xcd, 0x1b, 0x66, 0x03, 0xea, 0x83, 0x59, 0xc8, 0xe7, 0xe6,
+	0x1b, 0x68, 0x8c, 0xc8, 0x9c, 0x05, 0x64, 0x8a, 0xc7, 0x50, 0xe3, 0xf3, 0x90, 0xea, 0xca, 0x91,
+	0x62, 0xa9, 0xbd, 0xfd, 0x6e, 0x91, 0xd0, 0xcd, 0x41, 0x37, 0x02, 0x80, 0x6d, 0xa8, 0x7d, 0x0c,
+	0xa6, 0x73, 0x7d, 0x4b, 0x00, 0xdb, 0xe6, 0x77, 0x05, 0x76, 0x6d, 0x6f, 0x16, 0x32, 0x3a, 0xa6,
+	0x9f, 0x13, 0x01, 0xc7, 0x53, 0xd8, 0x8d, 0x68, 0x1c, 0x06, 0x7e, 0x4c, 0x9d, 0xf5, 0x14, 0xf7,
+	0x0a, 0x8c, 0xd8, 0xfb, 0x46, 0x53, 0xe9, 0x3a, 0x3e, 0x83, 0x46, 0x98, 0xa1, 0xf4, 0x6d, 0x51,
+	0x68, 0xf5, 0xf6, 0x2a, 0x25, 0x24, 0xfd, 0xce, 0x63, 0xcc, 0x49, 0x62, 0x1a, 0xf9, 0x64, 0x46,
+	0xf5, 0x9a, 0x40, 0xef, 0xa0, 0x0e, 0x5a, 0x5a, 0x0e, 0x48, 0xc2, 0x3f, 0x39, 0xf1, 0x24, 0x10,
+	0x51, 0xea, 0xb2, 0x63, 0x3a, 0xa0, 0x2e, 0x22, 0x67, 0xae, 0x45, 0x2b, 0xe5, 0x6f, 0x56, 0x1a,
+	0xec, 0x2c, 0x5d, 0x64, 0xc8, 0x26, 0x76, 0xa0, 0x55, 0x34, 0x90, 0x41, 0x9b, 0x66, 0x1f, 0xf6,
+	0x6d, 0x1e, 0x51, 0x32, 0x13, 0xdc, 0xf7, 0x7e, 0x98, 0xf0, 0x3e, 0x61, 0x6c, 0xb1, 0x9f, 0x35,
+	0xbd, 0xcc, 0x33, 0x30, 0xaa, 0x44, 0xf2, 0xc4, 0x4f, 0xe1, 0x09, 0x71, 0xdd, 0x88, 0xba, 0x84,
+	0xd3, 0xa9, 0x93, 0x0b, 0x66, 0xdb, 0x93, 0xaa, 0x75, 0xf3, 0x15, 0xe0, 0x02, 0x3c, 0x22, 0x91,
+	0x08, 0xcc, 0x69, 0x14, 0xcb, 0xe3, 0xfd, 0xc6, 0xc8, 0xf0, 0x9e, 0x2f, 0xea, 0x5f, 0x88, 0xdc,
+	0x5e, 0xb6, 0x76, 0xf3, 0x87, 0x52, 0x30, 0x1e, 0x26, 0x7c, 0x25, 0xfe, 0xe6, 0xe7, 0x3d, 0x83,
+	0xce, 0x92, 0x11, 0x2e, 0xa3, 0x08, 0xb7, 0x6d, 0x31, 0xfc, 0x51, 0x99, 0x57, 0x11, 0x79, 0xcd,
+	0xcf, 0xc0, 0x1c, 0xc0, 0x41, 0x65, 0xec, 0xcd, 0x4e, 0xfc, 0xe2, 0x35, 0xb4, 0x8a, 0xe1, 0x35,
+	0x68, 0xf7, 0x87, 0x57, 0xa3, 0xf1, 0xc0, 0xb6, 0xcf, 0xdf, 0x7e, 0x18, 0x68, 0x8f, 0x10, 0x41,
+	0xbd, 0xbd, 0x2e, 0xd5, 0x14, 0x04, 0x78, 0x3c, 0x3e, 0xbf, 0xbe, 0x18, 0x5e, 0x69, 0x5b, 0xbd,
+	0x9f, 0x35, 0x68, 0xdd, 0x08, 0x51, 0x5b, 0xec, 0xd5, 0x9b, 0x50, 0x7c, 0x09, 0xcd, 0xf4, 0xb1,
+	0xc9, 0x34, 0xd8, 0x29, 0x9b, 0xa6, 0x0d, 0xa3, 0xaa, 0x88, 0x97, 0xd0, 0xbc, 0xf5, 0x49, 0x94,
+	0xd1, 0x0e, 0xca, 0x88, 0xd2, 0x83, 0x33, 0x0e, 0xab, 0x9b, 0xf9, 0xdc, 0x0c, 0x3a, 0x15, 0x6b,
+	0x41, 0x6b, 0x85, 0x74, 0xef, 0xc1, 0x8d, 0xe7, 0x6b, 0x20, 0x33, 0xaf, 0x53, 0x05, 0x3d, 0xc0,
+	0x3f, 0x3f, 0x5a, 0x3c, 0xbe, 0x47, 0x62, 0xf5, 0x6d, 0x18, 0xd6, 0xbf, 0x81, 0x99, 0x95, 0x25,
+	0xad, 0xd4, 0xcb, 0x84, 0xb1, 0x8b, 0x44, 0x4c, 0xfb, 0xf5, 0xbf, 0xcd, 0x64, 0x29, 0xe9, 0x54,
+	0xea, 0x3b, 0xc2, 0xee, 0x1e, 0xc0, 0xea, 0x57, 0x00, 0x00, 0x00, 0xff, 0xff, 0x17, 0x00, 0x01,
+	0xdd, 0xc7, 0x05, 0x00, 0x00,
 }
diff --git a/go/src/google.golang.org/grpc/trace.go b/go/src/google.golang.org/grpc/trace.go
index cde04fb..f6747e1 100644
--- a/go/src/google.golang.org/grpc/trace.go
+++ b/go/src/google.golang.org/grpc/trace.go
@@ -101,9 +101,8 @@
 func (p payload) String() string {
 	if p.sent {
 		return fmt.Sprintf("sent: %v", p.msg)
-	} else {
-		return fmt.Sprintf("recv: %v", p.msg)
 	}
+	return fmt.Sprintf("recv: %v", p.msg)
 }
 
 type fmtStringer struct {
diff --git a/go/src/google.golang.org/grpc/transport/control.go b/go/src/google.golang.org/grpc/transport/control.go
index c99f363..7e9bdf3 100644
--- a/go/src/google.golang.org/grpc/transport/control.go
+++ b/go/src/google.golang.org/grpc/transport/control.go
@@ -162,10 +162,6 @@
 type inFlow struct {
 	// The inbound flow control limit for pending data.
 	limit uint32
-	// conn points to the shared connection-level inFlow that is shared
-	// by all streams on that conn. It is nil for the inFlow on the conn
-	// directly.
-	conn *inFlow
 
 	mu sync.Mutex
 	// pendingData is the overall data which have been received but not been
@@ -176,75 +172,39 @@
 	pendingUpdate uint32
 }
 
-// onData is invoked when some data frame is received. It increments not only its
-// own pendingData but also that of the associated connection-level flow.
+// onData is invoked when a data frame is received. It updates pendingData.
 func (f *inFlow) onData(n uint32) error {
-	if n == 0 {
-		return nil
-	}
 	f.mu.Lock()
 	defer f.mu.Unlock()
-	if f.pendingData+f.pendingUpdate+n > f.limit {
-		return fmt.Errorf("received %d-bytes data exceeding the limit %d bytes", f.pendingData+f.pendingUpdate+n, f.limit)
-	}
-	if f.conn != nil {
-		if err := f.conn.onData(n); err != nil {
-			return ConnectionErrorf("%v", err)
-		}
-	}
 	f.pendingData += n
+	if f.pendingData+f.pendingUpdate > f.limit {
+		return fmt.Errorf("received %d bytes of data, exceeding the limit of %d bytes", f.pendingData+f.pendingUpdate, f.limit)
+	}
 	return nil
 }
 
-// connOnRead updates the connection level states when the application consumes data.
-func (f *inFlow) connOnRead(n uint32) uint32 {
-	if n == 0 || f.conn != nil {
-		return 0
-	}
+// onRead is invoked when the application reads the data. It returns the window
+// update, if any, to be sent to the peer.
+func (f *inFlow) onRead(n uint32) uint32 {
 	f.mu.Lock()
 	defer f.mu.Unlock()
+	if f.pendingData == 0 {
+		return 0
+	}
 	f.pendingData -= n
 	f.pendingUpdate += n
 	if f.pendingUpdate >= f.limit/4 {
-		ret := f.pendingUpdate
+		wu := f.pendingUpdate
 		f.pendingUpdate = 0
-		return ret
+		return wu
 	}
 	return 0
 }
 
-// onRead is invoked when the application reads the data. It returns the window updates
-// for both stream and connection level.
-func (f *inFlow) onRead(n uint32) (swu, cwu uint32) {
-	if n == 0 {
-		return
-	}
-	f.mu.Lock()
-	defer f.mu.Unlock()
-	if f.pendingData == 0 {
-		// pendingData has been adjusted by restoreConn.
-		return
-	}
-	f.pendingData -= n
-	f.pendingUpdate += n
-	if f.pendingUpdate >= f.limit/4 {
-		swu = f.pendingUpdate
-		f.pendingUpdate = 0
-	}
-	cwu = f.conn.connOnRead(n)
-	return
-}
-
-// restoreConn is invoked when a stream is terminated. It removes its stake in
-// the connection-level flow and resets its own state.
-func (f *inFlow) restoreConn() uint32 {
-	if f.conn == nil {
-		return 0
-	}
+func (f *inFlow) resetPendingData() uint32 {
 	f.mu.Lock()
 	defer f.mu.Unlock()
 	n := f.pendingData
 	f.pendingData = 0
-	f.pendingUpdate = 0
-	return f.conn.connOnRead(n)
+	return n
 }
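After this change each `inFlow` does its own accounting with no back pointer from stream to connection (the caller updates both levels explicitly), which removes the lock-ordering hazard. The self-contained copy below reproduces the new accounting so the batching behavior is easy to see; the `limit/4` threshold means window updates are coalesced until a quarter of the window has been consumed:

```go
package main

import (
	"fmt"
	"sync"
)

// inFlow is a simplified copy of the receive-side flow control accounting
// after this change: one independent ledger per level (stream or connection).
type inFlow struct {
	limit uint32

	mu            sync.Mutex
	pendingData   uint32 // received but not yet consumed by the application
	pendingUpdate uint32 // consumed but not yet acknowledged to the peer
}

// onData records n newly received bytes and reports a limit violation.
func (f *inFlow) onData(n uint32) error {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.pendingData += n
	if f.pendingData+f.pendingUpdate > f.limit {
		return fmt.Errorf("received %d bytes of data, exceeding the limit of %d bytes", f.pendingData+f.pendingUpdate, f.limit)
	}
	return nil
}

// onRead records n bytes consumed by the application and returns the window
// update to send to the peer, batching until a quarter of the window is used.
func (f *inFlow) onRead(n uint32) uint32 {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.pendingData == 0 {
		return 0
	}
	f.pendingData -= n
	f.pendingUpdate += n
	if f.pendingUpdate >= f.limit/4 {
		wu := f.pendingUpdate
		f.pendingUpdate = 0
		return wu
	}
	return 0
}

func main() {
	f := &inFlow{limit: 64}
	_ = f.onData(32)
	fmt.Println(f.onRead(8)) // 8 < 64/4: update batched, returns 0
	fmt.Println(f.onRead(8)) // reaches 16: batched update of 16 flushed
}
```

In the client, `CloseStream` now combines this with `resetPendingData`: the unread bytes of a dead stream are fed into the connection-level `onRead` so the connection window is not leaked.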
diff --git a/go/src/google.golang.org/grpc/transport/handler_server.go b/go/src/google.golang.org/grpc/transport/handler_server.go
index fef541d..00d3855 100644
--- a/go/src/google.golang.org/grpc/transport/handler_server.go
+++ b/go/src/google.golang.org/grpc/transport/handler_server.go
@@ -65,7 +65,7 @@
 	if r.Method != "POST" {
 		return nil, errors.New("invalid gRPC request method")
 	}
-	if !strings.Contains(r.Header.Get("Content-Type"), "application/grpc") {
+	if !validContentType(r.Header.Get("Content-Type")) {
 		return nil, errors.New("invalid gRPC request content-type")
 	}
 	if _, ok := w.(http.Flusher); !ok {
@@ -92,9 +92,12 @@
 	}
 
 	var metakv []string
+	if r.Host != "" {
+		metakv = append(metakv, ":authority", r.Host)
+	}
 	for k, vv := range r.Header {
 		k = strings.ToLower(k)
-		if isReservedHeader(k) {
+		if isReservedHeader(k) && !isWhitelistedPseudoHeader(k) {
 			continue
 		}
 		for _, v := range vv {
@@ -108,7 +111,6 @@
 				}
 			}
 			metakv = append(metakv, k, v)
-
 		}
 	}
 	st.headerMD = metadata.Pairs(metakv...)
@@ -196,6 +198,10 @@
 		}
 		if md := s.Trailer(); len(md) > 0 {
 			for k, vv := range md {
+				// Clients don't tolerate reading restricted headers after some non-restricted ones have been sent.
+				if isReservedHeader(k) {
+					continue
+				}
 				for _, v := range vv {
 					// http2 ResponseWriter mechanism to
 					// send undeclared Trailers after the
@@ -249,6 +255,10 @@
 		ht.writeCommonHeaders(s)
 		h := ht.rw.Header()
 		for k, vv := range md {
+			// Clients don't tolerate reading restricted headers after some non-restricted ones have been sent.
+			if isReservedHeader(k) {
+				continue
+			}
 			for _, v := range vv {
 				h.Add(k, v)
 			}
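The handler_server.go hunks above repeatedly filter with `isReservedHeader` (now paired with a pseudo-header whitelist for `:authority`) before copying request headers into metadata or emitting trailers. A small sketch of that filtering follows; the reserved-header list here is an assumption that approximates the transport package's helper, not its exact definition:

```go
package main

import (
	"fmt"
	"strings"
)

// isReservedHeader is a simplified stand-in for the transport package's
// helper: HTTP/2 pseudo-headers and gRPC-internal headers must not be
// surfaced as user metadata or echoed into trailers. The exact list is
// an approximation for illustration.
func isReservedHeader(hdr string) bool {
	if hdr != "" && hdr[0] == ':' {
		return true
	}
	switch hdr {
	case "content-type", "grpc-message-type", "grpc-encoding",
		"grpc-message", "grpc-status", "grpc-timeout", "te":
		return true
	}
	return false
}

// userMetadata keeps only the headers that may be copied into user-visible
// metadata, mirroring the filtering in the handler-based server transport.
func userMetadata(headers map[string][]string) []string {
	var metakv []string
	for k, vv := range headers {
		k = strings.ToLower(k)
		if isReservedHeader(k) {
			continue
		}
		for _, v := range vv {
			metakv = append(metakv, k, v)
		}
	}
	return metakv
}

func main() {
	h := map[string][]string{
		"Content-Type": {"application/grpc"},
		"X-Trace-Id":   {"abc123"},
	}
	fmt.Println(userMetadata(h)) // only x-trace-id survives the filter
}
```

The same predicate is reused on the client side when serializing outgoing metadata, so application metadata can never smuggle a pseudo-header after regular headers have been written.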
diff --git a/go/src/google.golang.org/grpc/transport/http2_client.go b/go/src/google.golang.org/grpc/transport/http2_client.go
index 5d4a8c4..e624f8d 100644
--- a/go/src/google.golang.org/grpc/transport/http2_client.go
+++ b/go/src/google.golang.org/grpc/transport/http2_client.go
@@ -35,7 +35,6 @@
 
 import (
 	"bytes"
-	"errors"
 	"io"
 	"math"
 	"net"
@@ -140,29 +139,6 @@
 			conn.Close()
 		}
 	}()
-	// Send connection preface to server.
-	n, err := conn.Write(clientPreface)
-	if err != nil {
-		return nil, ConnectionErrorf("transport: %v", err)
-	}
-	if n != len(clientPreface) {
-		return nil, ConnectionErrorf("transport: preface mismatch, wrote %d bytes; want %d", n, len(clientPreface))
-	}
-	framer := newFramer(conn)
-	if initialWindowSize != defaultWindowSize {
-		err = framer.writeSettings(true, http2.Setting{http2.SettingInitialWindowSize, uint32(initialWindowSize)})
-	} else {
-		err = framer.writeSettings(true)
-	}
-	if err != nil {
-		return nil, ConnectionErrorf("transport: %v", err)
-	}
-	// Adjust the connection flow control window if needed.
-	if delta := uint32(initialConnWindowSize - defaultWindowSize); delta > 0 {
-		if err := framer.writeWindowUpdate(true, 0, delta); err != nil {
-			return nil, ConnectionErrorf("transport: %v", err)
-		}
-	}
 	ua := primaryUA
 	if opts.UserAgent != "" {
 		ua = opts.UserAgent + " " + ua
@@ -178,7 +154,7 @@
 		writableChan:    make(chan int, 1),
 		shutdownChan:    make(chan struct{}),
 		errorChan:       make(chan struct{}),
-		framer:          framer,
+		framer:          newFramer(conn),
 		hBuf:            &buf,
 		hEnc:            hpack.NewEncoder(&buf),
 		controlBuf:      newRecvBuffer(),
@@ -191,28 +167,49 @@
 		maxStreams:      math.MaxInt32,
 		streamSendQuota: defaultWindowSize,
 	}
+	// Start the reader goroutine for incoming messages. Each transport has
+	// a dedicated goroutine which reads HTTP2 frames from the network and
+	// dispatches them to the corresponding stream entity.
+	go t.reader()
+	// Send connection preface to server.
+	n, err := t.conn.Write(clientPreface)
+	if err != nil {
+		t.Close()
+		return nil, ConnectionErrorf("transport: %v", err)
+	}
+	if n != len(clientPreface) {
+		t.Close()
+		return nil, ConnectionErrorf("transport: preface mismatch, wrote %d bytes; want %d", n, len(clientPreface))
+	}
+	if initialWindowSize != defaultWindowSize {
+		err = t.framer.writeSettings(true, http2.Setting{http2.SettingInitialWindowSize, uint32(initialWindowSize)})
+	} else {
+		err = t.framer.writeSettings(true)
+	}
+	if err != nil {
+		t.Close()
+		return nil, ConnectionErrorf("transport: %v", err)
+	}
+	// Adjust the connection flow control window if needed.
+	if delta := uint32(initialConnWindowSize - defaultWindowSize); delta > 0 {
+		if err := t.framer.writeWindowUpdate(true, 0, delta); err != nil {
+			t.Close()
+			return nil, ConnectionErrorf("transport: %v", err)
+		}
+	}
 	go t.controller()
 	t.writableChan <- 0
-	// Start the reader goroutine for incoming message. The threading model
-	// on receiving is that each transport has a dedicated goroutine which
-	// reads HTTP2 frame from network. Then it dispatches the frame to the
-	// corresponding stream entity.
-	go t.reader()
 	return t, nil
 }
 
 func (t *http2Client) newStream(ctx context.Context, callHdr *CallHdr) *Stream {
-	fc := &inFlow{
-		limit: initialWindowSize,
-		conn:  t.fc,
-	}
 	// TODO(zhaoq): Handle uint32 overflow of Stream.id.
 	s := &Stream{
 		id:            t.nextID,
 		method:        callHdr.Method,
 		sendCompress:  callHdr.SendCompress,
 		buf:           newRecvBuffer(),
-		fc:            fc,
+		fc:            &inFlow{limit: initialWindowSize},
 		sendQuotaPool: newQuotaPool(int(t.streamSendQuota)),
 		headerChan:    make(chan struct{}),
 	}
@@ -236,9 +233,11 @@
 	var timeout time.Duration
 	if dl, ok := ctx.Deadline(); ok {
 		timeout = dl.Sub(time.Now())
-		if timeout <= 0 {
-			return nil, ContextErr(context.DeadlineExceeded)
-		}
+	}
+	select {
+	case <-ctx.Done():
+		return nil, ContextErr(ctx.Err())
+	default:
 	}
 	pr := &peer.Peer{
 		Addr: t.conn.RemoteAddr(),
@@ -272,6 +271,10 @@
 		}
 	}
 	t.mu.Lock()
+	if t.activeStreams == nil {
+		t.mu.Unlock()
+		return nil, ErrConnClosing
+	}
 	if t.state != reachable {
 		t.mu.Unlock()
 		return nil, ErrConnClosing
@@ -289,7 +292,10 @@
 		}
 	}
 	if _, err := wait(ctx, t.shutdownChan, t.writableChan); err != nil {
-		// t.streamsQuota will be updated when t.CloseStream is invoked.
+		// Return the stream quota now because no stream is returned to the caller.
+		if _, ok := err.(StreamError); ok && checkStreamsQuota {
+			t.streamsQuota.add(1)
+		}
 		return nil, err
 	}
 	t.mu.Lock()
@@ -341,6 +347,10 @@
 	if md, ok := metadata.FromContext(ctx); ok {
 		hasMD = true
 		for k, v := range md {
+			// HTTP/2 doesn't allow pseudo-headers to be set after non-pseudo-headers have been sent.
+			if isReservedHeader(k) {
+				continue
+			}
 			for _, entry := range v {
 				t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
 			}
@@ -390,9 +400,19 @@
 func (t *http2Client) CloseStream(s *Stream, err error) {
 	var updateStreams bool
 	t.mu.Lock()
+	if t.activeStreams == nil {
+		t.mu.Unlock()
+		return
+	}
 	if t.streamsQuota != nil {
 		updateStreams = true
 	}
+	if t.state == draining && len(t.activeStreams) == 1 {
+		// The transport is draining and s is the last live stream on t.
+		t.mu.Unlock()
+		t.Close()
+		return
+	}
 	delete(t.activeStreams, s.id)
 	t.mu.Unlock()
 	if updateStreams {
@@ -404,8 +424,10 @@
 	// other goroutines.
 	s.cancel()
 	s.mu.Lock()
-	if q := s.fc.restoreConn(); q > 0 {
-		t.controlBuf.put(&windowUpdate{0, q})
+	if q := s.fc.resetPendingData(); q > 0 {
+		if n := t.fc.onRead(q); n > 0 {
+			t.controlBuf.put(&windowUpdate{0, n})
+		}
 	}
 	if s.state == streamDone {
 		s.mu.Unlock()
@@ -427,9 +449,12 @@
 // accessed any more.
 func (t *http2Client) Close() (err error) {
 	t.mu.Lock()
+	if t.state == reachable {
+		close(t.errorChan)
+	}
 	if t.state == closing {
 		t.mu.Unlock()
-		return errors.New("transport: Close() was already called")
+		return
 	}
 	t.state = closing
 	t.mu.Unlock()
@@ -452,6 +477,25 @@
 	return
 }
 
+func (t *http2Client) GracefulClose() error {
+	t.mu.Lock()
+	if t.state == closing {
+		t.mu.Unlock()
+		return nil
+	}
+	if t.state == draining {
+		t.mu.Unlock()
+		return nil
+	}
+	t.state = draining
+	active := len(t.activeStreams)
+	t.mu.Unlock()
+	if active == 0 {
+		return t.Close()
+	}
+	return nil
+}
+
 // Write formats the data into HTTP2 data frame(s) and sends it out. The caller
 // should proceed only if Write returns nil.
 // TODO(zhaoq): opts.Delay is ignored in this implementation. Support it later
@@ -505,6 +549,10 @@
 		t.framer.adjustNumWriters(1)
 		// Got some quota. Try to acquire writing privilege on the transport.
 		if _, err := wait(s.ctx, t.shutdownChan, t.writableChan); err != nil {
+			if _, ok := err.(StreamError); ok {
+				// Return the connection quota back.
+				t.sendQuotaPool.add(len(p))
+			}
 			if t.framer.adjustNumWriters(-1) == 0 {
 				// This writer is the last one in this batch and has the
 				// responsibility to flush the buffered frames. It queues
@@ -514,6 +562,16 @@
 			}
 			return err
 		}
+		select {
+		case <-s.ctx.Done():
+			t.sendQuotaPool.add(len(p))
+			if t.framer.adjustNumWriters(-1) == 0 {
+				t.controlBuf.put(&flushIO{})
+			}
+			t.writableChan <- 0
+			return ContextErr(s.ctx.Err())
+		default:
+		}
 		if r.Len() == 0 && t.framer.adjustNumWriters(0) == 1 {
 			// Do a force flush iff this is last frame for the entire gRPC message
 			// and the caller is the only writer at this moment.
@@ -560,33 +618,44 @@
 // Window updates will deliver to the controller for sending when
 // the cumulative quota exceeds the corresponding threshold.
 func (t *http2Client) updateWindow(s *Stream, n uint32) {
-	swu, cwu := s.fc.onRead(n)
-	if swu > 0 {
-		t.controlBuf.put(&windowUpdate{s.id, swu})
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	if s.state == streamDone {
+		return
 	}
-	if cwu > 0 {
-		t.controlBuf.put(&windowUpdate{0, cwu})
+	if w := t.fc.onRead(n); w > 0 {
+		t.controlBuf.put(&windowUpdate{0, w})
+	}
+	if w := s.fc.onRead(n); w > 0 {
+		t.controlBuf.put(&windowUpdate{s.id, w})
 	}
 }
 
 func (t *http2Client) handleData(f *http2.DataFrame) {
+	size := len(f.Data())
+	if err := t.fc.onData(uint32(size)); err != nil {
+		t.notifyError(ConnectionErrorf("%v", err))
+		return
+	}
 	// Select the right stream to dispatch.
 	s, ok := t.getStream(f)
 	if !ok {
+		if w := t.fc.onRead(uint32(size)); w > 0 {
+			t.controlBuf.put(&windowUpdate{0, w})
+		}
 		return
 	}
-	size := len(f.Data())
 	if size > 0 {
+		s.mu.Lock()
+		if s.state == streamDone {
+			s.mu.Unlock()
+			// The stream has been closed. Release the corresponding quota.
+			if w := t.fc.onRead(uint32(size)); w > 0 {
+				t.controlBuf.put(&windowUpdate{0, w})
+			}
+			return
+		}
 		if err := s.fc.onData(uint32(size)); err != nil {
-			if _, ok := err.(ConnectionError); ok {
-				t.notifyError(err)
-				return
-			}
-			s.mu.Lock()
-			if s.state == streamDone {
-				s.mu.Unlock()
-				return
-			}
 			s.state = streamDone
 			s.statusCode = codes.Internal
 			s.statusDesc = err.Error()
@@ -595,6 +664,7 @@
 			t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl})
 			return
 		}
+		s.mu.Unlock()
 		// TODO(bradfitz, zhaoq): A copy is required here because there is no
 		// guarantee f.Data() is consumed before the arrival of next frame.
 		// Can this copy be eliminated?
diff --git a/go/src/google.golang.org/grpc/transport/http2_server.go b/go/src/google.golang.org/grpc/transport/http2_server.go
index 0316423..1c4d585 100644
--- a/go/src/google.golang.org/grpc/transport/http2_server.go
+++ b/go/src/google.golang.org/grpc/transport/http2_server.go
@@ -139,15 +139,11 @@
 // operateHeader takes action on the decoded headers.
 func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream)) {
 	buf := newRecvBuffer()
-	fc := &inFlow{
-		limit: initialWindowSize,
-		conn:  t.fc,
-	}
 	s := &Stream{
 		id:  frame.Header().StreamID,
 		st:  t,
 		buf: buf,
-		fc:  fc,
+		fc:  &inFlow{limit: initialWindowSize},
 	}
 
 	var state decodeState
@@ -307,33 +303,51 @@
 // Window updates will deliver to the controller for sending when
 // the cumulative quota exceeds the corresponding threshold.
 func (t *http2Server) updateWindow(s *Stream, n uint32) {
-	swu, cwu := s.fc.onRead(n)
-	if swu > 0 {
-		t.controlBuf.put(&windowUpdate{s.id, swu})
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	if s.state == streamDone {
+		return
 	}
-	if cwu > 0 {
-		t.controlBuf.put(&windowUpdate{0, cwu})
+	if w := t.fc.onRead(n); w > 0 {
+		t.controlBuf.put(&windowUpdate{0, w})
+	}
+	if w := s.fc.onRead(n); w > 0 {
+		t.controlBuf.put(&windowUpdate{s.id, w})
 	}
 }
 
 func (t *http2Server) handleData(f *http2.DataFrame) {
+	size := len(f.Data())
+	if err := t.fc.onData(uint32(size)); err != nil {
+		grpclog.Printf("transport: http2Server %v", err)
+		t.Close()
+		return
+	}
 	// Select the right stream to dispatch.
 	s, ok := t.getStream(f)
 	if !ok {
+		if w := t.fc.onRead(uint32(size)); w > 0 {
+			t.controlBuf.put(&windowUpdate{0, w})
+		}
 		return
 	}
-	size := len(f.Data())
 	if size > 0 {
-		if err := s.fc.onData(uint32(size)); err != nil {
-			if _, ok := err.(ConnectionError); ok {
-				grpclog.Printf("transport: http2Server %v", err)
-				t.Close()
-				return
+		s.mu.Lock()
+		if s.state == streamDone {
+			s.mu.Unlock()
+			// The stream has been closed. Release the corresponding quota.
+			if w := t.fc.onRead(uint32(size)); w > 0 {
+				t.controlBuf.put(&windowUpdate{0, w})
 			}
+			return
+		}
+		if err := s.fc.onData(uint32(size)); err != nil {
+			s.mu.Unlock()
 			t.closeStream(s)
 			t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl})
 			return
 		}
+		s.mu.Unlock()
 		// TODO(bradfitz, zhaoq): A copy is required here because there is no
 		// guarantee f.Data() is consumed before the arrival of next frame.
 		// Can this copy be eliminated?
@@ -446,6 +460,10 @@
 		t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-encoding", Value: s.sendCompress})
 	}
 	for k, v := range md {
+		if isReservedHeader(k) {
+			// Clients don't tolerate reading restricted headers after some non-restricted ones have been sent.
+			continue
+		}
 		for _, entry := range v {
 			t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
 		}
@@ -488,6 +506,10 @@
 	t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-message", Value: statusDesc})
 	// Attach the trailer metadata.
 	for k, v := range s.trailer {
+		// Clients don't tolerate reading restricted headers after some non-restricted ones have been sent.
+		if isReservedHeader(k) {
+			continue
+		}
 		for _, entry := range v {
 			t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
 		}
@@ -507,6 +529,10 @@
 	// TODO(zhaoq): Support multi-writers for a single stream.
 	var writeHeaderFrame bool
 	s.mu.Lock()
+	if s.state == streamDone {
+		s.mu.Unlock()
+		return StreamErrorf(codes.Unknown, "the stream is done")
+	}
 	if !s.headerOk {
 		writeHeaderFrame = true
 		s.headerOk = true
@@ -574,6 +600,10 @@
 		// Got some quota. Try to acquire writing privilege on the
 		// transport.
 		if _, err := wait(s.ctx, t.shutdownChan, t.writableChan); err != nil {
+			if _, ok := err.(StreamError); ok {
+				// Return the connection quota.
+				t.sendQuotaPool.add(ps)
+			}
 			if t.framer.adjustNumWriters(-1) == 0 {
 				// This writer is the last one in this batch and has the
 				// responsibility to flush the buffered frames. It queues
@@ -583,6 +613,16 @@
 			}
 			return err
 		}
+		select {
+		case <-s.ctx.Done():
+			t.sendQuotaPool.add(ps)
+			if t.framer.adjustNumWriters(-1) == 0 {
+				t.controlBuf.put(&flushIO{})
+			}
+			t.writableChan <- 0
+			return ContextErr(s.ctx.Err())
+		default:
+		}
 		var forceFlush bool
 		if r.Len() == 0 && t.framer.adjustNumWriters(0) == 1 && !opts.Last {
 			forceFlush = true
@@ -680,20 +720,22 @@
 	t.mu.Lock()
 	delete(t.activeStreams, s.id)
 	t.mu.Unlock()
-	if q := s.fc.restoreConn(); q > 0 {
-		t.controlBuf.put(&windowUpdate{0, q})
-	}
+	// In case stream sending and receiving are invoked in separate
+	// goroutines (e.g., bi-directional streaming), cancel needs to be
+	// called to interrupt the potential blocking on other goroutines.
+	s.cancel()
 	s.mu.Lock()
+	if q := s.fc.resetPendingData(); q > 0 {
+		if w := t.fc.onRead(q); w > 0 {
+			t.controlBuf.put(&windowUpdate{0, w})
+		}
+	}
 	if s.state == streamDone {
 		s.mu.Unlock()
 		return
 	}
 	s.state = streamDone
 	s.mu.Unlock()
-	// In case stream sending and receiving are invoked in separate
-	// goroutines (e.g., bi-directional streaming), cancel needs to be
-	// called to interrupt the potential blocking on other goroutines.
-	s.cancel()
 }
 
 func (t *http2Server) RemoteAddr() net.Addr {
diff --git a/go/src/google.golang.org/grpc/transport/http_util.go b/go/src/google.golang.org/grpc/transport/http_util.go
index 7a3594a..f2e23dc 100644
--- a/go/src/google.golang.org/grpc/transport/http_util.go
+++ b/go/src/google.golang.org/grpc/transport/http_util.go
@@ -127,16 +127,40 @@
 	}
 }
 
+// isWhitelistedPseudoHeader checks whether hdr is an HTTP2 pseudo header
+// that should be propagated into metadata visible to users.
+func isWhitelistedPseudoHeader(hdr string) bool {
+	switch hdr {
+	case ":authority":
+		return true
+	default:
+		return false
+	}
+}
+
 func (d *decodeState) setErr(err error) {
 	if d.err == nil {
 		d.err = err
 	}
 }
 
+func validContentType(t string) bool {
+	e := "application/grpc"
+	if !strings.HasPrefix(t, e) {
+		return false
+	}
+	// Support variations on the content-type
+	// (e.g. "application/grpc+blah", "application/grpc;blah").
+	if len(t) > len(e) && t[len(e)] != '+' && t[len(e)] != ';' {
+		return false
+	}
+	return true
+}
+
 func (d *decodeState) processHeaderField(f hpack.HeaderField) {
 	switch f.Name {
 	case "content-type":
-		if !strings.Contains(f.Value, "application/grpc") {
+		if !validContentType(f.Value) {
 			d.setErr(StreamErrorf(codes.FailedPrecondition, "transport: received the unexpected content-type %q", f.Value))
 			return
 		}
@@ -162,7 +186,7 @@
 	case ":path":
 		d.method = f.Value
 	default:
-		if !isReservedHeader(f.Name) {
+		if !isReservedHeader(f.Name) || isWhitelistedPseudoHeader(f.Name) {
 			if f.Name == "user-agent" {
 				i := strings.LastIndex(f.Value, " ")
 				if i == -1 {
diff --git a/go/src/google.golang.org/grpc/transport/http_util_test.go b/go/src/google.golang.org/grpc/transport/http_util_test.go
index b5b18bf..279acbc 100644
--- a/go/src/google.golang.org/grpc/transport/http_util_test.go
+++ b/go/src/google.golang.org/grpc/transport/http_util_test.go
@@ -85,3 +85,25 @@
 		}
 	}
 }
+
+func TestValidContentType(t *testing.T) {
+	tests := []struct {
+		h    string
+		want bool
+	}{
+		{"application/grpc", true},
+		{"application/grpc+", true},
+		{"application/grpc+blah", true},
+		{"application/grpc;", true},
+		{"application/grpc;blah", true},
+		{"application/grpcd", false},
+		{"application/grpd", false},
+		{"application/grp", false},
+	}
+	for _, tt := range tests {
+		got := validContentType(tt.h)
+		if got != tt.want {
+			t.Errorf("validContentType(%q) = %v; want %v", tt.h, got, tt.want)
+		}
+	}
+}
diff --git a/go/src/google.golang.org/grpc/transport/transport.go b/go/src/google.golang.org/grpc/transport/transport.go
index 87fdf53..1e9d0c0 100644
--- a/go/src/google.golang.org/grpc/transport/transport.go
+++ b/go/src/google.golang.org/grpc/transport/transport.go
@@ -321,6 +321,7 @@
 	reachable transportState = iota
 	unreachable
 	closing
+	draining
 )
 
 // NewServerTransport creates a ServerTransport with conn or non-nil error
@@ -337,7 +338,7 @@
 	Dialer func(string, time.Duration) (net.Conn, error)
 	// AuthOptions stores the credentials required to setup a client connection and/or issue RPCs.
 	AuthOptions []credentials.Credentials
-	// Timeout specifies the timeout for dialing a client connection.
+	// Timeout specifies the timeout for dialing a ClientTransport.
 	Timeout time.Duration
 }
 
@@ -391,6 +392,10 @@
 	// is called only once.
 	Close() error
 
+	// GracefulClose starts to tear down the transport. It stops accepting
+	// new RPCs and waits for the pending RPCs to complete.
+	GracefulClose() error
+
 	// Write sends the data for the given stream. A nil stream indicates
 	// the write is to be performed on the transport as a whole.
 	Write(s *Stream, data []byte, opts *Options) error
diff --git a/go/src/google.golang.org/grpc/transport/transport_test.go b/go/src/google.golang.org/grpc/transport/transport_test.go
index c9a9532..6ebec45 100644
--- a/go/src/google.golang.org/grpc/transport/transport_test.go
+++ b/go/src/google.golang.org/grpc/transport/transport_test.go
@@ -331,19 +331,17 @@
 			defer wg.Done()
 			s, err := ct.NewStream(context.Background(), callHdr)
 			if err != nil {
-				t.Errorf("failed to open stream: %v", err)
+				t.Errorf("%v.NewStream(_, _) = _, %v, want _, <nil>", ct, err)
 			}
 			if err := ct.Write(s, expectedRequestLarge, &Options{Last: true, Delay: false}); err != nil {
-				t.Errorf("failed to send data: %v", err)
+				t.Errorf("%v.Write(_, _, _) = %v, want <nil>", ct, err)
 			}
 			p := make([]byte, len(expectedResponseLarge))
-			_, recvErr := io.ReadFull(s, p)
-			if recvErr != nil || !bytes.Equal(p, expectedResponseLarge) {
-				t.Errorf("Error: %v, want <nil>; Result len: %d, want len %d", recvErr, len(p), len(expectedResponseLarge))
+			if _, err := io.ReadFull(s, p); err != nil || !bytes.Equal(p, expectedResponseLarge) {
+				t.Errorf("io.ReadFull(s, p) = _, %v, want _, <nil>; p = %q, want %q", err, p, expectedResponseLarge)
 			}
-			_, recvErr = io.ReadFull(s, p)
-			if recvErr != io.EOF {
-				t.Errorf("Error: %v; want <EOF>", recvErr)
+			if _, err = io.ReadFull(s, p); err != io.EOF {
+				t.Errorf("Failed to complete the stream %v; want <EOF>", err)
 			}
 		}()
 	}
@@ -352,6 +350,50 @@
 	server.stop()
 }
 
+func TestGracefulClose(t *testing.T) {
+	server, ct := setUp(t, 0, math.MaxUint32, normal)
+	callHdr := &CallHdr{
+		Host:   "localhost",
+		Method: "foo.Small",
+	}
+	s, err := ct.NewStream(context.Background(), callHdr)
+	if err != nil {
+		t.Fatalf("%v.NewStream(_, _) = _, %v, want _, <nil>", ct, err)
+	}
+	if err = ct.GracefulClose(); err != nil {
+		t.Fatalf("%v.GracefulClose() = %v, want <nil>", ct, err)
+	}
+	var wg sync.WaitGroup
+	// Expect failure for all follow-up streams because ct has been gracefully closed.
+	for i := 0; i < 100; i++ {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			if _, err := ct.NewStream(context.Background(), callHdr); err != ErrConnClosing {
+				t.Errorf("%v.NewStream(_, _) = _, %v, want _, %v", ct, err, ErrConnClosing)
+			}
+		}()
+	}
+	opts := Options{
+		Last:  true,
+		Delay: false,
+	}
+	// The stream created before GracefulClose can still proceed.
+	if err := ct.Write(s, expectedRequest, &opts); err != nil {
+		t.Fatalf("%v.Write(_, _, _) = %v, want <nil>", ct, err)
+	}
+	p := make([]byte, len(expectedResponse))
+	if _, err := io.ReadFull(s, p); err != nil || !bytes.Equal(p, expectedResponse) {
+		t.Fatalf("io.ReadFull(s, p) = _, %v, want _, <nil>; p = %q, want %q", err, p, expectedResponse)
+	}
+	if _, err = io.ReadFull(s, p); err != io.EOF {
+		t.Fatalf("Failed to complete the stream %v; want <EOF>", err)
+	}
+	wg.Wait()
+	ct.Close()
+	server.stop()
+}
+
 func TestLargeMessageSuspension(t *testing.T) {
 	server, ct := setUp(t, 0, math.MaxUint32, suspended)
 	callHdr := &CallHdr{
@@ -584,8 +626,8 @@
 		t.Fatalf("%v got err %v with statusCode %d, want err <EOF> with statusCode %d", s, err, s.statusCode, code)
 	}
 
-	if ss.fc.pendingData != 0 || ss.fc.pendingUpdate != 0 || sc.fc.pendingData != 0 || sc.fc.pendingUpdate != initialWindowSize {
-		t.Fatalf("Server mistakenly resets inbound flow control params: got %d, %d, %d, %d; want 0, 0, 0, %d", ss.fc.pendingData, ss.fc.pendingUpdate, sc.fc.pendingData, sc.fc.pendingUpdate, initialWindowSize)
+	if ss.fc.pendingData != 0 || ss.fc.pendingUpdate != 0 || sc.fc.pendingData != 0 || sc.fc.pendingUpdate <= initialWindowSize {
+		t.Fatalf("Server mistakenly resets inbound flow control params: got %d, %d, %d, %d; want 0, 0, 0, >%d", ss.fc.pendingData, ss.fc.pendingUpdate, sc.fc.pendingData, sc.fc.pendingUpdate, initialWindowSize)
 	}
 	ct.CloseStream(s, nil)
 	// Test server behavior for violation of connection flow control window size restriction.
@@ -631,15 +673,15 @@
 			break
 		}
 	}
-	if s.fc.pendingData != initialWindowSize || s.fc.pendingUpdate != 0 || conn.fc.pendingData != initialWindowSize || conn.fc.pendingUpdate != 0 {
-		t.Fatalf("Client mistakenly updates inbound flow control params: got %d, %d, %d, %d; want %d, %d, %d, %d", s.fc.pendingData, s.fc.pendingUpdate, conn.fc.pendingData, conn.fc.pendingUpdate, initialWindowSize, 0, initialWindowSize, 0)
+	if s.fc.pendingData <= initialWindowSize || s.fc.pendingUpdate != 0 || conn.fc.pendingData <= initialWindowSize || conn.fc.pendingUpdate != 0 {
+		t.Fatalf("Client mistakenly updates inbound flow control params: got %d, %d, %d, %d; want >%d, %d, >%d, %d", s.fc.pendingData, s.fc.pendingUpdate, conn.fc.pendingData, conn.fc.pendingUpdate, initialWindowSize, 0, initialWindowSize, 0)
 	}
 	if err != io.EOF || s.statusCode != codes.Internal {
 		t.Fatalf("Got err %v and the status code %d, want <EOF> and the code %d", err, s.statusCode, codes.Internal)
 	}
 	conn.CloseStream(s, err)
-	if s.fc.pendingData != 0 || s.fc.pendingUpdate != 0 || conn.fc.pendingData != 0 || conn.fc.pendingUpdate != initialWindowSize {
-		t.Fatalf("Client mistakenly resets inbound flow control params: got %d, %d, %d, %d; want 0, 0, 0, %d", s.fc.pendingData, s.fc.pendingUpdate, conn.fc.pendingData, conn.fc.pendingUpdate, initialWindowSize)
+	if s.fc.pendingData != 0 || s.fc.pendingUpdate != 0 || conn.fc.pendingData != 0 || conn.fc.pendingUpdate <= initialWindowSize {
+		t.Fatalf("Client mistakenly resets inbound flow control params: got %d, %d, %d, %d; want 0, 0, 0, >%d", s.fc.pendingData, s.fc.pendingUpdate, conn.fc.pendingData, conn.fc.pendingUpdate, initialWindowSize)
 	}
 	// Test the logic for the violation of the connection flow control window size restriction.
 	//