
Playing with iptables: for FAH.


ShadowPho

Member
Joined
Jun 8, 2005
Location
I am in your stack, SUBbing your registers!
Gentoo 2.6.20 patched with kerrighed 2.3.0.

Now, this isn't for me, but for another member who needs help with this. He has a cluster of PCs with a FAH client running on each of them, and he wants to compress all data going in and out of FAH.

I was thinking along the lines of least resistance: two iptables entries, one routing everything FAH sends to another port where a program encodes it, and another routing everything FAH receives through a decoder, which then passes it back to FAH.

Will the following solution work?

PseudoCode
Code:
 int main()
 {
      // register ports

      char *buffer = (char*) malloc(sizeof(char) * 4096);

      while (true)
      {
           // if data is received on port 1
           // then
           sendData(port2, (int)encode(buffer), buffer);
           // encode(buffer) returns the number of bytes after encoding

           // if data is received on port 2
           // then
           sendData(port1, (int)decode(buffer), buffer);
           // will have to do some playing around with split messages.
      }
 }

And here is a diagram:
 

Attachments

  • FAH_hack.JPG (18.5 KB)
Great news! FAH doesn't have set ports! So either compress all data for all ports on the lo (loopback) connection, or find a way to compress just the FAH data via some method other than ports..
 
oh wow i love network programming in java
The purpose of this is to get more bandwidth throughput with minimal CPU and RAM usage. While Java would certainly be a nice, easy solution, it isn't the best choice here, as we would need to decode upwards of 350 MB/s.... so C++ or even C.

Great news! Fah doesn't have set ports! So either compress all data for all ports on lo connection or find a way to just compress data from FAH via a different method than ports..

That simply means we might have to write a small script that monitors which port FAH first tries to connect on and then set iptables to redirect that port... That is, unless it picks a new port every time, or I am misunderstanding something.

Well, maybe we SHOULD look into a custom kernel solution... hehehe
 
You're saying you need to decode 350 MB/second?

Well, if speed is what you're after, C++ isn't it; go with C.

Java is very fast and well optimized for networking.

What I like to do is write up the solution in Java, which takes about 5 minutes, then look at how well it performs; if it isn't performing well, then look into doing it all in C.
But C++ just makes it harder with no speed advantage.

So I wouldn't toss Java out altogether because you think it's slow. Spend 5 minutes looking up how to do networking in Java (literally, that's all it will take), write up the protocol you've drawn above, and give it a run. Then look at how well it's working. When I worked at IBM, they would almost always do all the networking in Java, and these are applications they sell to customers for millions, transmitting and encrypting tons of information, all in Java. The client/server doesn't all have to be in Java either; it doesn't care what the other end is written in. You just can't send whole objects over the line like you can with a Java-to-Java client/server application (though that gets really fun and powerful when you can).
 

C it is then.....
2 MB, 1 sec. Cuts down to... 23 KB?????!?!??!!???!?
16 MB, 8 sec. Cuts down to 25 KB. Heh.
32 MB, 17 sec. Cuts down to... 28 KB.... Looks like my file was too predictable.

Tomorrow I might try Java, and a proper random-file test like this:

Generate a random file.
Feed it into LZMA.
Decompress.
Compare the output to the random file.

Would be much better than what I have... but I am sleepy.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h> //testing

#include "LzmaLib.h"

unsigned char *usrInp;
unsigned char *buffer;
unsigned char *props;
int result;
long startTime;
size_t compLen;   /* in: dest capacity, out: compressed size */
size_t decompLen; /* in: dest capacity, out: decompressed size */
size_t propsLen;
FILE *in;
FILE *out;
FILE *tempF;
long sizeOfFile;

int main(int argc, char *argv[])
{
printf("Hello\n");
in = fopen("in.txt", "rb");
out = fopen("out.txt", "wb");
tempF = fopen("mid.txt", "wb");
 if (in == NULL || out == NULL || tempF == NULL)
 {fputs("OH GOD. FAIL. CANT OPEN FILE.", stdout); return 0;}

fseek(in, 0, SEEK_END);

 sizeOfFile = ftell(in);


printf("Allocating Memory...");

/* give the compressed buffer some slack: on incompressible data
   LZMA output can be slightly larger than the input */
usrInp = (unsigned char*) malloc(sizeOfFile + 1); // this will get removed later.
buffer = (unsigned char*) malloc(sizeOfFile + sizeOfFile / 3 + 128);
props  = (unsigned char*) malloc(16);

if (usrInp == NULL || buffer == NULL || props == NULL)
{printf("no mem. FAIL FAIL FAIL"); return 0;}

printf("Done\nReading file....Size of file:%ld ", sizeOfFile);
 rewind(in);

result = fread(usrInp, sizeOfFile, 1, in);


printf("Done\nCode Value:%i\nError Value:%i\nPrepared to process.\n", result, ferror(in));

printf("\n Press enter when ready."); while (getchar() != '\n');

printf("encoding...");
startTime = (long)clock();
compLen  = sizeOfFile + sizeOfFile / 3 + 128;
propsLen = 5;

 result = LzmaCompress(buffer, &compLen, usrInp, sizeOfFile,
  props, &propsLen, /* *outPropsSize must be = 5 */
 5,        /* 0 <= level <= 9, default = 5 */
  1 << 24, /* dictSize, default = (1 << 24) */
  3,       /* 0 <= lc <= 8, default = 3 */
  0,       /* 0 <= lp <= 4, default = 0 */
  2,       /* 0 <= pb <= 4, default = 2 */
  32,      /* 5 <= fb <= 273, default = 32 */
  2        /* numThreads, 1 or 2, default = 2 */
  );
 printf("RESULT:%i COMPRESSED SIZE:%lu\n\n", result, (unsigned long)compLen);


decompLen = sizeOfFile;
result = LzmaUncompress(usrInp, &decompLen, buffer, &compLen,
  props, propsLen);

printf("RESULT:%i\n\n", result);


printf("ENCODED AND DECODED. TOOK %f SECONDS\n", (double)((double)clock() - startTime) / (double)CLOCKS_PER_SEC);
fwrite(usrInp, decompLen, 1, out); /* the round-tripped data */
fwrite(buffer, compLen, 1, tempF); /* the compressed data */
printf("\n Now press Enter");

while (getchar() != '\n');

free(usrInp);
free(buffer);
free(props);
fclose(in);
fclose(out);
fclose(tempF);

return 0;
}
 
I tried to compare it with Java.
Now, either I am doing something completely off, or the LZMA library is much slower from Java. Or C really is that much faster.
54 MB of input, provided by Shelnutt.

                 C         Java
Compressed       23 MB     23 MB
Time to encode   26 s      85 s
Memory used      ~96 MB    ~210 MB

I tried to make all the settings as close as possible. For C I used Visual C++ 2008, set to compile as C. For Java I used Eclipse.
So far C looks about three times faster...
 
Interesting results, though everyone knows C is much faster than Java; it's C++ versus Java that people argue about.
If you have time to do a version in C++, it would be interesting to see how fast it is compared to Java/C.
 