trait BackupClientInterface[T <: KafkaConsumerInterface] extends LazyLogging

An interface serving as a template for how to back up a Kafka stream into some data storage.

- T
  The underlying kafkaClientInterface type
Linear Supertypes: LazyLogging, AnyRef, Any
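As a hedged, minimal sketch of what a concrete implementation must provide (the class name InMemoryBackupClient, the chosen State/BackupResult types, and the stubbed bodies are all hypothetical illustrations, not part of Guardian):

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.Sink
import org.apache.pekko.util.ByteString

// Hypothetical example implementation, not part of Guardian itself
class InMemoryBackupClient[T <: KafkaConsumerInterface](implicit
    val kafkaClientInterface: T,
    val backupConfig: Backup,
    val system: ActorSystem
) extends BackupClientInterface[T] {

  // Define what a finished backup and an in-progress backup look like
  override type BackupResult = Option[String] // e.g. the final object key
  override type State        = Long           // e.g. bytes already written

  override def empty: () => Future[BackupResult] =
    () => Future.successful(None)

  override def getCurrentUploadState(key: String): Future[UploadStateResult] =
    Future.successful(UploadStateResult(None, None)) // no unfinished backups

  override def backupToStorageSink(
      key: String,
      currentState: Option[State]
  ): Sink[(ByteString, kafkaClientInterface.CursorContext), Future[BackupResult]] =
    ??? // upload the bytes, then commit the Kafka cursor

  override def backupToStorageTerminateSink(
      previousState: PreviousState
  ): Sink[ByteString, Future[BackupResult]] =
    ??? // close the JSON array of the unfinished backup
}
```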
Type Members
- abstract type BackupResult
Override this type to define the result of backing up data to a data source
- case class PreviousState(stateDetails: StateDetails, previousKey: String) extends Product with Serializable
- abstract type State
Override this type to define the result of calculating the previous state (if it exists)
- case class StateDetails(state: State, backupObjectMetadata: BackupObjectMetadata) extends Product with Serializable
- case class UploadStateResult(current: Option[StateDetails], previous: Option[PreviousState]) extends Product with Serializable
Abstract Value Members
- implicit abstract val backupConfig: Backup
- abstract def backupToStorageSink(key: String, currentState: Option[State]): Sink[(ByteString, T.CursorContext), Future[BackupResult]]
  Override this method to define how to back up a pekko.util.ByteString combined with a Kafka kafkaClientInterface.CursorContext to a DataSource.
  - key
    The object key or filename for what is being backed up
  - currentState
    The current state, if it exists. If this is empty then a new backup is being created with the associated key; otherwise, if this contains a State, then the defined pekko.stream.scaladsl.Sink needs to handle resuming a previously unfinished backup with that key by directly appending the pekko.util.ByteString data.
  - returns
    A pekko.stream.scaladsl.Sink that, given a pekko.util.ByteString (containing a single Kafka io.aiven.guardian.kafka.models.ReducedConsumerRecord) along with its kafkaClientInterface.CursorContext, backs up the data to your data storage. The pekko.stream.scaladsl.Sink is also responsible for executing kafkaClientInterface.commitCursor when the data is successfully backed up.
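A hedged sketch of one way to satisfy this contract inside a concrete implementation, assuming pekko-streams is on the classpath; appendToObject is a hypothetical storage call, and the sketch assumes kafkaClientInterface.commitCursor returns a Future:

```scala
import scala.concurrent.Future

import org.apache.pekko.stream.scaladsl.{Flow, Keep, Sink}
import org.apache.pekko.util.ByteString

override def backupToStorageSink(
    key: String,
    currentState: Option[State]
): Sink[(ByteString, kafkaClientInterface.CursorContext), Future[BackupResult]] = {
  implicit val ec: scala.concurrent.ExecutionContext = system.dispatcher
  Flow[(ByteString, kafkaClientInterface.CursorContext)]
    .mapAsync(parallelism = 1) { case (bytes, context) =>
      // appendToObject (hypothetical): append bytes to `key`, resuming
      // from `currentState` when an unfinished backup exists
      appendToObject(key, bytes, currentState).flatMap { result =>
        // Commit the Kafka cursor only after the data is safely stored
        kafkaClientInterface.commitCursor(context).map(_ => result)
      }
    }
    .toMat(Sink.last)(Keep.right)
}
```

Note that Sink.last fails on an empty stream, which is exactly the case the abstract `empty` member exists to cover.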
- abstract def backupToStorageTerminateSink(previousState: PreviousState): Sink[ByteString, Future[BackupResult]]
  A sink that is executed whenever a previously existing backup needs to be terminated and closed. Generally speaking this pekko.stream.scaladsl.Sink is similar to backupToStorageSink except that kafkaClientInterface.CursorContext is not required since no Kafka messages are being written. Note that "terminate" refers to the fact that this Sink is executed with a pekko.stream.scaladsl.Source which, when written to an already existing unfinished backup, terminates the containing JSON array so that it becomes valid parsable JSON.
  - previousState
    A data structure containing both the State along with the associated key which you can refer to in order to define your pekko.stream.scaladsl.Sink
  - returns
    A pekko.stream.scaladsl.Sink that points to an existing key defined by previousState.previousKey
- abstract def empty: () => Future[BackupResult]
  Override this method to define a zero value that covers the case that occurs immediately when a SubFlow has been split at BackupStreamPosition.Start. If you have difficulties defining an empty value for BackupResult then you can wrap it in an Option and just use None for the empty case.
  - returns
    An "empty" value that is used when a split SubFlow has just started
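For instance, following the Option tip above (MultipartUploadResult here is a hypothetical result type with no natural zero value):

```scala
import scala.concurrent.Future

object EmptyExample {
  // Hypothetical result type with no obvious "zero" value
  final case class MultipartUploadResult(etag: String)

  // Wrapping in Option makes the empty case trivially expressible
  type BackupResult = Option[MultipartUploadResult]

  val empty: () => Future[BackupResult] =
    () => Future.successful(None)
}
```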
- abstract def getCurrentUploadState(key: String): Future[UploadStateResult]
  Override this method to define how to retrieve the current state of any unfinished backups.
  - key
    The object key or filename for what is currently being backed up
  - returns
    A scala.concurrent.Future with a UploadStateResult data structure that optionally contains the state associated with key, along with the previous latest state before key (if it exists)
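A hedged sketch of one possible implementation, assuming a hypothetical listUnfinishedUploads() storage query that returns unfinished uploads keyed by object key:

```scala
import scala.concurrent.Future

// Inside a concrete implementation; listUnfinishedUploads() is a
// hypothetical query returning Future[Map[String, StateDetails]]
override def getCurrentUploadState(key: String): Future[UploadStateResult] = {
  implicit val ec: scala.concurrent.ExecutionContext = system.dispatcher
  listUnfinishedUploads().map { unfinished =>
    // The unfinished upload for this exact key, if one exists
    val current = unfinished.get(key)
    // The latest unfinished upload strictly before `key`, if any
    val previous = unfinished.view
      .filterKeys(_ < key)
      .toSeq
      .sortBy { case (k, _) => k }
      .lastOption
      .map { case (k, details) => PreviousState(details, k) }
    UploadStateResult(current, previous)
  }
}
```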
- implicit abstract val kafkaClientInterface: T
- implicit abstract val system: ActorSystem
Concrete Value Members
- final def !=(arg0: Any): Boolean (Definition Classes: AnyRef → Any)
- final def ##: Int (Definition Classes: AnyRef → Any)
- final def ==(arg0: Any): Boolean (Definition Classes: AnyRef → Any)
- final def asInstanceOf[T0]: T0 (Definition Classes: Any)
- def backup: RunnableGraph[T.MatCombineResult]
  The entire flow that involves reading from Kafka, transforming the data into JSON and then backing it up into a data source.
  - returns
    The KafkaClientInterface.Control, which depends on the implementation of T (typically this is used to control the underlying stream).
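A hedged sketch of running the graph (MyBackupClient is a hypothetical concrete implementation; the exact materialized control type depends on T):

```scala
import org.apache.pekko.actor.ActorSystem

object RunBackup extends App {
  implicit val system: ActorSystem = ActorSystem("guardian-backup")

  // Hypothetical concrete BackupClientInterface implementation
  val client = new MyBackupClient()

  // Materializes the whole Kafka -> JSON -> storage pipeline; the
  // returned control value is typically used to shut down the stream
  val control = client.backup.run()
}
```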
- def clone(): AnyRef (protected[lang]; Definition Classes: AnyRef; Annotations: @throws(classOf[java.lang.CloneNotSupportedException]) @HotSpotIntrinsicCandidate() @native())
- final def eq(arg0: AnyRef): Boolean (Definition Classes: AnyRef)
- def equals(arg0: AnyRef): Boolean (Definition Classes: AnyRef → Any)
- final def getClass(): Class[_ <: AnyRef] (Definition Classes: AnyRef → Any; Annotations: @HotSpotIntrinsicCandidate() @native())
- def hashCode(): Int (Definition Classes: AnyRef → Any; Annotations: @HotSpotIntrinsicCandidate() @native())
- final def isInstanceOf[T0]: Boolean (Definition Classes: Any)
- lazy val logger: Logger
- Attributes
- protected
- Definition Classes
- LazyLogging
- Annotations
- @transient()
- final def ne(arg0: AnyRef): Boolean (Definition Classes: AnyRef)
- final def notify(): Unit (Definition Classes: AnyRef; Annotations: @HotSpotIntrinsicCandidate() @native())
- final def notifyAll(): Unit (Definition Classes: AnyRef; Annotations: @HotSpotIntrinsicCandidate() @native())
- final def synchronized[T0](arg0: => T0): T0 (Definition Classes: AnyRef)
- def toString(): String (Definition Classes: AnyRef → Any)
- final def wait(arg0: Long, arg1: Int): Unit (Definition Classes: AnyRef; Annotations: @throws(classOf[java.lang.InterruptedException]))
- final def wait(arg0: Long): Unit (Definition Classes: AnyRef; Annotations: @throws(classOf[java.lang.InterruptedException]) @native())
- final def wait(): Unit (Definition Classes: AnyRef; Annotations: @throws(classOf[java.lang.InterruptedException]))
- object UploadStateResult extends Serializable