
I have developed a single server/multiple client TCP Application.

The client consists of x threads, each doing processing on its own data and then sending that data over the TCP socket to the server for display.

The server is basically a GUI with a window. The server receives data from the client and displays it.

Now, the problem is that since there are 40 threads inside the client and each thread wants to send data, how can I achieve this using one connected socket?

My Suggestion:

My approach was to create a data structure inside each of the 40 threads in which the data to be sent is maintained. A separate send thread, owning the one connected socket on the client side, is then created. This thread reads data from the first thread's data structure, sends it over the socket, then reads the data from the second thread's structure, and so on.

Confusions:

But I am not sure how this would be implemented, as I am new to all this. :( What if a thread is writing to its data structure while the send thread tries to read the data at the same time? I am familiar with mutexes, critical sections, etc., but that sounds too complex for my simple application.

Any suggestions/comments other than my own approach are welcome. If you think my approach is correct, then please help me resolve the confusions I mentioned above.

Thanks a lot in advance :)

Edit:

Can I put a timer on the send thread so that, after a specific interval, it suspends thread #1 (so that it can access that thread's data structure without synchronization issues), reads data from its data structure, sends it over the TCP socket, and resumes thread #1; then suspends thread #2, reads data from its data structure, sends it over the TCP socket, and resumes thread #2, and so on?

Do you care about occasional data loss if you are just updating the server GUI? The nature of your application may tolerate it; if so, consider UDP instead of TCP to avoid all the synchronization/mutex work. Also note that in that case you're limited to a single host –  Ahmed Masud May 30 '13 at 11:43

3 Answers

Accepted answer

A common approach is to have one thread dedicated to sending the data. The other threads post their data into a shared container (list, deque, etc) and signal the sender thread that data is available. The sender then wakes up and processes whatever data is available.

EDIT:

The gist of it is as follows:

HANDLE data_available_event; // manual-reset event (create with CreateEvent(NULL, TRUE, FALSE, NULL)); set when queue has data, reset when queue is empty
CRITICAL_SECTION cs;         // protects access to the data queue; initialize once with InitializeCriticalSection(&cs)
std::deque<std::string> data_to_send;

WorkerThread()
{
    while(do_work)
    {
        std::string data = generate_data();
        EnterCriticalSection(&cs);
        data_to_send.push_back(data);
        SetEvent(data_available_event); // signal sender thread that data is available
        LeaveCriticalSection(&cs);
    }
}

SenderThread()
{
    while(do_work)
    {
        WaitForSingleObject(data_available_event, INFINITE);
        EnterCriticalSection(&cs);
        std::string data = data_to_send.front();
        data_to_send.pop_front();
        if(data_to_send.empty())
        {
            ResetEvent(data_available_event); // queue is empty; reset event and wait until more data is available
        }
        LeaveCriticalSection(&cs);
        send_data(data);
    }
}

This is of course assuming the data can be sent in any order. I use strings only for illustrative purposes; you probably want some kind of custom object that knows how to serialize the data it holds.
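As a sketch of what such a custom object might look like (the `Message` type and its fields here are invented for illustration, not part of the answer above), one common approach is to length-prefix each payload so the receiver can split the TCP byte stream back into individual messages:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical message carrying one worker thread's result.
// TCP is a byte stream with no message boundaries, so we prepend a
// fixed-size header: 4 bytes of thread id, 4 bytes of payload length.
// For simplicity this uses host byte order; a real protocol would fix
// an endianness (e.g. with htonl) so both ends agree.
struct Message {
    uint32_t thread_id;
    std::string payload;

    std::string serialize() const {
        uint32_t len = static_cast<uint32_t>(payload.size());
        std::string out(8, '\0');
        std::memcpy(&out[0], &thread_id, 4); // bytes 0..3: thread id
        std::memcpy(&out[4], &len, 4);       // bytes 4..7: payload length
        return out + payload;                // header followed by payload
    }
};
```

The receiver then reads exactly 8 bytes, decodes the length, and reads that many payload bytes, which keeps messages intact no matter how TCP chunks the stream.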

+1 for the post, looks like a good idea, but the problem is that all my 40 threads are in a while(1) loop continuously generating data to be sent to the server for display, so nobody knows when data is written by all 40 to the shared memory. Please see the edit that I just added –  Ayse May 30 '13 at 11:30
You can protect access to the container with a critical section. –  Luke May 30 '13 at 14:00
Thank you so much for the edit. Looks really helpful. But I have a little confusion about extending it to x worker threads. What happens when worker thread 1 has finished processing and is now writing inside the critical section, and at the same time thread 2 tries to do so? Will thread 2's request for the critical section be queued up, so that as soon as thread 1 frees the critical section it is given to thread 2? –  Ayse May 31 '13 at 6:30
Yes, EnterCriticalSection() blocks other threads until the thread that owns the critical section calls LeaveCriticalSection(). –  Luke May 31 '13 at 14:16
When EnterCriticalSection() is called, is access to ALL global variables restricted to the thread that entered the critical section, or only access to the variables that we want to protect? –  Ayse Jun 1 '13 at 5:13

Suspending thread #1 so you can access its data structure does not avoid synchronization issues. When you suspend it, thread #1 could be in the middle of an update to the data, so the socket thread gets part old data, part new. That is data corruption.

You need a shared data structure such as a FIFO queue. The worker threads add to the queue; the socket thread removes the oldest item from the queue. All access to this shared queue must be protected with a critical section unless you implement a lock-free queue (for example, a circular buffer).

Depending on your application needs, if you implement this queue you might not need the socket thread at all. Just do the dequeueing in the display thread.
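To illustrate the lock-free circular buffer mentioned above, here is a minimal portable C++11 sketch (the class name and sizes are invented for the example). Note the big caveat: this is safe only for a single producer and a single consumer; with 40 worker threads you would still need a lock, or a multi-producer queue design:

```cpp
#include <atomic>
#include <cstddef>

// Single-producer/single-consumer lock-free ring buffer.
// Correct ONLY when exactly one thread pushes and one thread pops;
// with many worker threads, protect it with a lock instead.
template <typename T, std::size_t N>
class SpscRing {
    T buf_[N];
    std::atomic<std::size_t> head_{0}; // next slot to pop (consumer owns)
    std::atomic<std::size_t> tail_{0}; // next slot to push (producer owns)
public:
    bool push(const T& item) {                 // called by the producer
        std::size_t t = tail_.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire))
            return false;                      // buffer full: caller drops or retries
        buf_[t] = item;
        tail_.store(next, std::memory_order_release); // publish the item
        return true;
    }
    bool pop(T& out) {                         // called by the consumer
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return false;                      // buffer empty
        out = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release); // free the slot
        return true;
    }
};
```

One such ring per worker thread (each worker is the sole producer of its own ring, the socket/display thread the sole consumer) would fit the questioner's 40-thread layout without any critical section.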

+1 for the answer :) Okay, according to my understanding there is a single queue shared between all x worker threads. One thread can access the queue at a time and write to it. After some time period, the display thread enters the critical section, displays the data, and leaves the critical section. The worker threads once again start accessing the shared queue one by one. Right? –  Ayse May 31 '13 at 6:45

There are a couple of ways of achieving this; Luke's idea suffers from race conditions that will still create data corruption.

You avoid that by using UDP instead of TCP as the transport protocol. It would be an especially good choice if you don't mind missing an occasional packet (which is okay for displaying rapidly changing data); it's fantastic for ensuring real-time updates on data where exact history doesn't matter (missing a point in a relatively smooth curve while plotting graphs is okay).

If the data packets are small and sort of represent a stream, then UDP is a great choice. Its benefit increases if you have multiple senders on different systems all displaying on a single screen.
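A minimal sketch of the datagram idea, using POSIX sockets (on Windows the same calls exist in Winsock after WSAStartup(); the function name here is invented for the demo). Because each sendto() is one self-contained datagram, no stream framing is needed, and each worker thread could even own its own socket, sidestepping the shared-socket problem entirely:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Send one UDP datagram to a loopback receiver and read it back.
// Purely illustrative: error handling is omitted for brevity.
std::string udp_roundtrip(const std::string& msg) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0); // receiver socket
    int tx = socket(AF_INET, SOCK_DGRAM, 0); // sender socket

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                       // let the OS pick a free port
    bind(rx, (sockaddr*)&addr, sizeof(addr));

    socklen_t len = sizeof(addr);
    getsockname(rx, (sockaddr*)&addr, &len); // learn the chosen port

    // One call, one datagram: message boundaries are preserved by UDP.
    sendto(tx, msg.data(), msg.size(), 0, (sockaddr*)&addr, sizeof(addr));

    char buf[512];
    ssize_t n = recvfrom(rx, buf, sizeof(buf), 0, nullptr, nullptr);
    close(rx);
    close(tx);
    return std::string(buf, n > 0 ? static_cast<std::size_t>(n) : 0);
}
```

The trade-off, as the comments below discuss, is that UDP gives no delivery guarantee, which matters on lossy wireless links.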

I can't afford packet losses + I am working on a wireless network in a hilly area where there are chances of packet loss, so I would prefer using TCP. –  Ayse May 31 '13 at 5:05
+ the problem still persists when using UDP: not all the threads can send data simultaneously on a single socket descriptor. And if I make a separate thread for sending data, then there is a synchronization issue if the sending thread tries to read data while a processing thread is writing it at the same time –  Ayse May 31 '13 at 5:06
@AyeshaHassan Ah, you are working on a very specific problem; well, I have 15 years of experience building real-time wireless protocols for applications in mountainous terrain. TCP is not necessarily a good choice because you don't have control over the 'real-timedness' of the application there. The engineering choices are varied. This may be off topic here, so do you want to discuss this offline? –  Ahmed Masud Jun 1 '13 at 14:09
