class BackupClient[T <: KafkaConsumerInterface] extends BackupClientInterface[T] with LazyLogging
Inheritance
- BackupClient
- BackupClientInterface
- LazyLogging
- AnyRef
- Any
Instance Constructors
- new BackupClient(maybeS3Settings: Option[S3Settings])(implicit kafkaClientInterface: T, backupConfig: Backup, system: ActorSystem, s3Config: S3, s3Headers: S3Headers)
Type Members
- case class PreviousState(stateDetails: StateDetails, previousKey: String) extends Product with Serializable
- Definition Classes
- BackupClientInterface
- case class StateDetails(state: State, backupObjectMetadata: BackupObjectMetadata) extends Product with Serializable
- Definition Classes
- BackupClientInterface
- case class UploadStateResult(current: Option[StateDetails], previous: Option[PreviousState]) extends Product with Serializable
- Definition Classes
- BackupClientInterface
- type BackupResult = Option[MultipartUploadResult]
Override this type to define the result of backing up data to a data source.
- Definition Classes
- BackupClient → BackupClientInterface
- type State = CurrentS3State
Override this type to define the result of calculating the previous state (if it exists).
- Definition Classes
- BackupClient → BackupClientInterface
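BackupResult and State are abstract type members declared in BackupClientInterface and fixed here for the S3 backend. A minimal, self-contained sketch of that pattern (the MultipartUploadResult and CurrentS3State below are simplified stand-ins for the real pekko-connectors and Guardian types, not their actual definitions):

```scala
// Simplified stand-ins for the real types; for illustration only.
final case class MultipartUploadResult(key: String)
final case class CurrentS3State(uploadId: String)

trait BackupClientInterface {
  // The interface leaves the result and state types abstract ...
  type BackupResult
  type State
}

class S3BackupClient extends BackupClientInterface {
  // ... and each concrete client pins them down, exactly as the
  // type members above do for the S3 implementation.
  override type BackupResult = Option[MultipartUploadResult]
  override type State        = CurrentS3State
}
```

Because BackupResult is path-dependent, code written against the interface stays agnostic of the storage backend while concrete callers get the precise type back.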
Value Members
- object UploadStateResult extends Serializable
- Definition Classes
- BackupClientInterface
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def backup: RunnableGraph[T.MatCombineResult]
The entire flow that involves reading from Kafka, transforming the data into JSON and then backing it up into a data source.
- returns
The KafkaClientInterface.Control, which depends on the implementation of T (typically this is used to control the underlying stream).
- Definition Classes
- BackupClientInterface
- implicit val backupConfig: Backup
- Definition Classes
- BackupClient → BackupClientInterface
- def backupToStorageSink(key: String, currentState: Option[CurrentS3State]): Sink[(ByteString, T.CursorContext), Future[BackupResult]]
Override this method to define how to back up a pekko.util.ByteString, combined with a Kafka kafkaClientInterface.CursorContext, to a DataSource.
- key
The object key or filename for what is being backed up
- currentState
The current state, if it exists. If this is empty then a new backup is being created with the associated key; otherwise, if this contains a State, the defined pekko.stream.scaladsl.Sink needs to handle resuming a previously unfinished backup with that key by directly appending the pekko.util.ByteString data.
- returns
A pekko.stream.scaladsl.Sink that, given a pekko.util.ByteString (containing a single Kafka io.aiven.guardian.kafka.models.ReducedConsumerRecord) along with its kafkaClientInterface.CursorContext, backs up the data to your data storage. The pekko.stream.scaladsl.Sink is also responsible for executing kafkaClientInterface.commitCursor when the data is successfully backed up.
- Definition Classes
- BackupClient → BackupClientInterface
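The create-versus-resume contract described above can be sketched without pekko, using plain functions (CurrentS3State here is a hypothetical stand-in, and describeSink merely names the branch the real Sink would take):

```scala
// Hypothetical, simplified stand-in for the real S3 state type.
final case class CurrentS3State(uploadId: String)

// Names the behaviour the real Sink must implement in each case:
// no state means starting a fresh upload for the key, some state
// means resuming that unfinished upload by appending data.
def describeSink(key: String, currentState: Option[CurrentS3State]): String =
  currentState match {
    case None        => s"start new upload for $key"
    case Some(state) => s"resume upload ${state.uploadId} for $key"
  }
```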
- def backupToStorageTerminateSink(previousState: PreviousState): Sink[ByteString, Future[BackupResult]]
A sink that is executed whenever a previously existing Backup needs to be terminated and closed. Generally speaking this pekko.stream.scaladsl.Sink is similar to backupToStorageSink except that kafkaClientInterface.CursorContext is not required, since no Kafka messages are being written. Note that "terminate" refers to the fact that this Sink is executed with a "null]" pekko.stream.scaladsl.Source which, when written to an already existing unfinished backup, terminates the containing JSON array so that it becomes valid parsable JSON.
- previousState
A data structure containing both the State along with the associated key, which you can refer to in order to define your pekko.stream.scaladsl.Sink.
- returns
A pekko.stream.scaladsl.Sink that points to an existing key defined by previousState.previousKey.
- Definition Classes
- BackupClient → BackupClientInterface
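The "null]" suffix mentioned above can be seen at work with plain strings: an unfinished backup is a JSON array missing its closing bracket, and the terminating write makes it parseable (a sketch of the file contents, not the actual stream code):

```scala
// An unfinished backup ends mid-array with a trailing comma:
val unfinished = """[{"offset":1},{"offset":2},"""

// Writing the literal "null]" closes the array with a final null
// element, so the whole file is now valid, parsable JSON.
val terminated = unfinished + "null]"
```

A JSON parser reading the terminated file sees a three-element array whose last element is null, which downstream restore code can simply skip.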
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @HotSpotIntrinsicCandidate() @native()
- def empty: () => Future[Option[MultipartUploadResult]]
Override this method to define a zero value that covers the case that occurs immediately when the SubFlow has been split at BackupStreamPosition.Start. If you have difficulties defining an empty value for BackupResult then you can wrap it in an Option and just use None for the empty case.
- returns
An "empty" value that is used when a split SubFlow has just started.
- Definition Classes
- BackupClient → BackupClientInterface
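For this client the None trick described above applies directly, since BackupResult is already an Option. A self-contained sketch of such an empty value (MultipartUploadResult is a simplified stand-in for the pekko-connectors type):

```scala
import scala.concurrent.Future

// Simplified stand-in for pekko's MultipartUploadResult.
final case class MultipartUploadResult(key: String)

// The zero value: a thunk returning an already-completed Future of
// None, used when a freshly split SubFlow has not produced any data.
val empty: () => Future[Option[MultipartUploadResult]] =
  () => Future.successful(None)
```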
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def getCurrentUploadState(key: String): Future[UploadStateResult]
Override this method to define how to retrieve the current state of any unfinished backups.
- key
The object key or filename for what is currently being backed up
- returns
A scala.concurrent.Future with a UploadStateResult data structure that optionally contains the state associated with key, along with the previous latest state before key (if it exists)
- Definition Classes
- BackupClient → BackupClientInterface
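How a caller might act on the returned UploadStateResult can be sketched with simplified stand-ins (the real StateDetails carries S3 multipart-upload information; the branch semantics below are an illustrative reading of the surrounding member docs, not library code):

```scala
// Simplified stand-ins mirroring the shape of the type members above.
final case class StateDetails(uploadId: String)
final case class PreviousState(stateDetails: StateDetails, previousKey: String)
final case class UploadStateResult(current: Option[StateDetails],
                                   previous: Option[PreviousState])

// One plausible interpretation of the three situations a caller faces:
// resume the current key's upload, close out a stale predecessor, or
// begin from scratch when no state exists at all.
def nextAction(result: UploadStateResult): String =
  (result.current, result.previous) match {
    case (Some(_), _)       => "resume the unfinished backup for the current key"
    case (None, Some(prev)) => s"terminate the backup at ${prev.previousKey} first"
    case (None, None)       => "start a brand new backup"
  }
```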
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- implicit val kafkaClientInterface: T
- Definition Classes
- BackupClient → BackupClientInterface
- lazy val logger: Logger
- Attributes
- protected
- Definition Classes
- LazyLogging
- Annotations
- @transient()
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- implicit val system: ActorSystem
- Definition Classes
- BackupClient → BackupClientInterface
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])