Processing a continuous stream of data

Gustavo_Brian
Level 2

Hi,



I have a service that continuously emits data. You start receiving data as soon as you open a TCP connection, and it never stops until you terminate the connection.

I'd like to develop a custom plugin so I can process that data in Dataiku. How can I do that, given that the data never ends?



Will the "build" log overload the server?



Thanks

UPDATE:



We are loading data from a flights metasearch service. They expose a data stream that we consume by polling a TCP connection (https://github.com/gbrian/Flightmate-Stream). We plan to use Dataiku to parse and sanitize the data, then drop it into Hadoop, in addition to running the corresponding analyses and lab work 😉
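As a rough sketch of how an endless TCP line stream can be cut into bounded micro-batches before it reaches Dataiku (the host, port, and staging sink in the comments are hypothetical, not part of the Flightmate-Stream project):

```python
from itertools import islice

def micro_batches(lines, batch_size=1000):
    """Group a (possibly endless) iterable of lines into fixed-size batches."""
    it = iter(lines)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return  # upstream closed; in production this loop never ends
        yield batch

# In production you would wrap the TCP connection, e.g.:
#   import socket
#   sock = socket.create_connection(("stream.example.com", 9000))  # hypothetical host/port
#   for batch in micro_batches(sock.makefile("r"), 1000):
#       write_batch_to_staging(batch)  # hypothetical sink: file, HDFS, Kafka producer...

# Demo with a finite stand-in stream:
demo = (f"record-{i}" for i in range(2500))
sizes = [len(b) for b in micro_batches(demo, 1000)]
print(sizes)  # [1000, 1000, 500]
```

Each batch becomes a bounded unit of work, so a downstream "build" only ever processes a finite chunk rather than the infinite stream.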



@alexander, hope this helps.

3 Replies
Alex_Combessie
Dataiker Alumni
Hi Gustavo,

For this type of use case, we would advise performing the data ingestion outside of Dataiku DSS, with a streaming engine such as Flume or Kafka.

Once the data is ingested, you can perform data transformation and machine learning modelling in DSS in a micro-batch way, using partitions to avoid recomputing on the whole data: https://doc.dataiku.com/dss/latest/partitions/index.html
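To make the micro-batch idea concrete: if each ingested record carries a timestamp, you can route it into a time-based partition so that a DSS build only recomputes the newest partition instead of the whole dataset. This is a minimal illustration of the mapping, not the DSS partitioning API itself (DSS derives partitions from path patterns or column values configured on the dataset):

```python
from datetime import datetime, timezone

def partition_id(ts_epoch):
    """Map a record's epoch timestamp to an hourly partition identifier,
    e.g. for laying records out under hour-based partition paths."""
    dt = datetime.fromtimestamp(ts_epoch, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d-%H")

print(partition_id(0))     # 1970-01-01-00
print(partition_id(3600))  # 1970-01-01-01
```

Records landing in the current hour only ever touch one partition, so the scheduled micro-batch build stays cheap no matter how much history accumulates.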

Cheers,

Alex
swarna
Level 2

Hi @Alex_Combessie,

Can you tell me how to connect to Kafka in Dataiku?

Thanks,

Swarna

ThayerA
Dataiker Alumni

Hi Swarna,

I am not a specialist on the topic, but here is the link to our documentation on Kafka: https://doc.dataiku.com/dss/latest/streaming/kafka.html#connection-setup
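Outside of the DSS connection setup described in that link, a plain Kafka consumer (for example with the third-party kafka-python client) can also feed data into a location DSS reads from. The topic and broker names below are assumptions, and the consumer loop is left commented out since it requires a running broker:

```python
import json

def parse_event(raw_bytes):
    """Decode one Kafka message value (JSON-encoded bytes) into a dict."""
    return json.loads(raw_bytes.decode("utf-8"))

# Hypothetical consumption loop using the kafka-python client
# (pip install kafka-python); topic and broker are assumptions:
#
# from kafka import KafkaConsumer
# consumer = KafkaConsumer(
#     "flight-events",                  # hypothetical topic
#     bootstrap_servers="broker:9092",  # hypothetical broker address
#     auto_offset_reset="earliest",
# )
# for msg in consumer:
#     event = parse_event(msg.value)
#     ...  # append to the storage that the DSS dataset points at

print(parse_event(b'{"price": 120, "route": "MAD-LHR"}'))
```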

 
