Monday, November 19, 2012

SEVERE: The system cannot infer the transport information from the http://localhost:8080/axis2/services/


Issue
 
A web service call returns the following exception:

SEVERE: The system cannot infer the transport information from the  http://localhost:8080/axis2/services/DataService URL.
Exception in thread "main" org.apache.axis2.AxisFault: The system cannot infer the transport information from the  http://localhost:8083/axis2/services/DataService URL.

 
Solution

If the URL itself is correct, check for stray spaces in the URL string.

Example of an undesirable leading space causing the problem:
String url = " http://localhost:8080/axis2/services/DataService";

To avoid such a problem, instead of:
DataServiceStub service = new DataServiceStub(url);

do:
DataServiceStub service = new DataServiceStub(url.trim());
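The defensive idea above can be taken a little further: trim the configured URL and fail fast if the result is still not a valid absolute URL, so the problem surfaces before the stub call. A minimal sketch using only the JDK (the Axis2-generated DataServiceStub is assumed and therefore omitted here):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class EndpointCheck {

    // Trims stray whitespace from an endpoint URL read from configuration
    // and fails fast if the result is not a valid absolute URL.
    public static String normalize(String url) {
        String cleaned = url.trim();
        try {
            if (!new URI(cleaned).isAbsolute()) {
                throw new IllegalArgumentException("Not an absolute URL: " + cleaned);
            }
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("Invalid URL: " + cleaned, e);
        }
        return cleaned;
    }

    public static void main(String[] args) {
        // The leading space below is exactly what triggers the Axis2 fault.
        String url = " http://localhost:8080/axis2/services/DataService";
        // Pass the normalized value to the stub, e.g.:
        // DataServiceStub service = new DataServiceStub(normalize(url));
        System.out.println(normalize(url));
    }
}
```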

Thursday, November 15, 2012

Cannot resolve the name '...' to a(n) 'type definition' component


Suppose you are validating an XML file against an XML Schema (a .xsd file), and an exception message like this is returned:


org.xml.sax.SAXParseException: src-resolve: Cannot resolve the name 'createNewAdRequest' to a(n) 'type definition' component.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.error(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)

Check whether the type declaration is accessible; in other words, whether it has the right scope.


For instance, here is the test.xml file:

<ClassifiedList>
    <ClassifiedAd>
        <id>1234</id>
        <content>
            Vintage 1963 T-Bird.  Less than 300 miles.
            Driven by my daughter until I took it away.
            Serious inquires only. 555-3264 after 7 PM.
        </content>
        <endDate>4/15/2007</endDate>
        <startDate>4/1/2007</startDate>
        <createNewAdRequest>
            <content>YYY</content>
            <endDate>YYY</endDate>
        </createNewAdRequest>
    </ClassifiedAd>  
</ClassifiedList>


And here is the respective XML Schema file - test.xsd:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <xs:element type="ClassifiedList" name="ClassifiedList" />
    <xs:complexType name="ClassifiedList">
      <xs:sequence>
        <xs:element minOccurs="0" type="ClassifiedAd"
                        name="ClassifiedAd" maxOccurs="unbounded" />
      </xs:sequence>
    </xs:complexType>

    <xs:element type="ClassifiedAd" name="ClassifiedAd" />
    <xs:complexType name="ClassifiedAd">
      <xs:sequence>
        <xs:element type="xs:int" name="id" />
        <xs:element type="xs:string" name="content" />
        <xs:element type="xs:string" name="endDate" />
        <xs:element type="xs:string" name="startDate" />
        <xs:element type="createNewAdRequest" name="createNewAdRequest" maxOccurs="unbounded" />
      </xs:sequence>
    </xs:complexType>
       
    <xs:element name="createNewAdRequest">
        <xs:complexType name="createNewAdRequest">
            <xs:sequence>
                <xs:element type="xs:string" name="content" />
                <xs:element type="xs:string" name="endDate" />
            </xs:sequence>
        </xs:complexType>
    </xs:element>
   
</xs:schema>


The last declaration in the schema nests the complexType inside its <xs:element>, which defines the type under that element's scope only.
So it is not accessible from outside, and the reference type="createNewAdRequest" cannot be resolved.
Replace that whole <xs:element name="createNewAdRequest"> block with the snippet below, where the complex type is defined at the same scope as the <ClassifiedList> type:
       
        <xs:complexType name="createNewAdRequest">
            <xs:sequence>
                <xs:element type="xs:string" name="content" />
                <xs:element type="xs:string" name="endDate" />
            </xs:sequence>
        </xs:complexType>
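To run this check programmatically, the schema can be compiled with the JDK's javax.xml.validation API; if a referenced type is not resolvable, newSchema() throws the src-resolve SAXParseException shown above. A minimal, self-contained sketch (the schema and document are reduced here to the createNewAdRequest fragment for brevity):

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XsdCheck {

    // Minimal schema with a *global* complex type, mirroring the corrected test.xsd
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='createNewAdRequest' type='createNewAdRequest'/>"
      + "  <xs:complexType name='createNewAdRequest'>"
      + "    <xs:sequence>"
      + "      <xs:element name='content' type='xs:string'/>"
      + "      <xs:element name='endDate' type='xs:string'/>"
      + "    </xs:sequence>"
      + "  </xs:complexType>"
      + "</xs:schema>";

    static final String XML =
        "<createNewAdRequest><content>YYY</content><endDate>YYY</endDate></createNewAdRequest>";

    // Compiles the schema and validates the document against it.
    public static boolean validate(String xsd, String xml) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {  // SAXException (including src-resolve) or IOException
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(validate(XSD, XML));
    }
}
```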
   

Tuesday, November 13, 2012

Maven and the "Exception in thread main java.lang.NoClassDefFoundError"



This is a tip for the situation when everything has been checked and looks fine, but still doesn't work!  : )

Short story
If you are sure that your configuration is correct and you have already tried everything, then try creating a new project from scratch and migrating the code afterwards.



Long story

Issue:

Output:

Exception in thread "main" java.lang.NoClassDefFoundError: validation/schema/XasValidator
Caused by: java.lang.ClassNotFoundException: validation.schema.XasValidator
  at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: validation.schema.XasValidator. Program will exit.



Solution

After checking all possible configurations in the project, including of course the pom.xml file, the problem still remained.

The solution came from creating a new project from scratch.

mkdir xalValidator
cd xalValidator
mvn archetype:create -DgroupId=com.adr -DartifactId=xalValidator -DpackageName=validator.schema -Dversion=1.0

Afterwards, I copied the pom.xml file from the defective project to the new one.
Then, I migrated the code from the old to the new project.
Finally I tested the project generating the jar file.
Yesss, success.

Sunday, November 4, 2012

Improving The Site Security - Checking .asc files with PGP (GPG)


This example shows, step by step, how to check a downloaded file for security issues, applying the concepts exposed in Apache's tutorial.


If you just want to test integrity, a simpler alternative is using MD5 or SHA, but checking both integrity and signature offers a better security solution.
This matters especially when downloading security libraries for a secure service or web service.


How can you assure that your site is secure if its libraries are not checked at all?
Just because you use SSH and the like, how can you be sure that the libraries were not tampered with and are exactly what you expect them to be?
So, it is necessary to spend some time making sure, and to build security from the base.

If you are downloading an Apache file, you have probably already seen the following on the download page:

It is essential that you verify the integrity of the downloaded files using the PGP or MD5 signatures.
The PGP signatures can be verified using PGP or GPG.
First download the KEYS as well as the asc signature file for the relevant distribution.
Make sure you get these files from the main distribution site, rather than from a mirror. Then verify the signatures using

% pgpk -a KEYS
% pgpv downloaded_file.asc
or
% pgp -ka KEYS
% pgp downloaded_file.asc
or
% gpg --import KEYS
% gpg --verify downloaded_file.asc




In the procedure above, you import the keys (KEYS) in order to be able to verify the downloaded files and check their integrity.
 
Nevertheless, suppose you don't have the keys from the KEYS file, or you'd like to check further, considering the message returned by the --verify option:
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.



Below is an example of a procedure based on the "web of trust".
Although it looks long at first sight, many steps are done once and never repeated; the data collected is kept, so the next runs become faster and easier.
The first time is a first time... after that, you just know how it works!


This example uses GnuPG, but the concepts are the same, so feel free to work with your favorite environment and tool.

From the documentation we have:
gpg is the OpenPGP part of the GNU Privacy Guard (GnuPG). 
It is a tool to provide digital encryption and signing services  using  the  OpenPGP      standard.
...
The gpg command is the standalone version of gpg.  
For desktop use you should consider using gpg2.

The example uses two files downloaded from the wss4j project, which represents a possible and realistic use case: a developer creating secure solutions.

The versions downloaded at the time were:

wss4j-bin-1.6.7.zip (the binary)
wss4j-bin-1.6.7.zip.asc  (and its respective .asc file)


1. First, put both target files in the same folder.

2. Discover the key ID by typing one of the commands below (GnuPG must be installed):
 gpg wss4j-bin-1.6.7.zip.asc
 or
 gpg --verify wss4j-bin-1.6.7.zip.asc

The output:
gpg: Signature made Wed 25 Jul 2012 11:25:57 AM BRT using RSA key ID 0AD53983
gpg: Can't check signature: public key not found

3. Get the public key from a trusted source, a well-known and reliable public keyserver:
 gpg --keyserver pgpkeys.mit.edu --recv-key 0AD53983

4. After importing the key, repeat the 2nd step.
Now gpg shall recognize the signature on the target file:
 gpg wss4j-bin-1.6.7.zip.asc
 or
 gpg --verify wss4j-bin-1.6.7.zip.asc

Output:
gpg: Signature made Wed 25 Jul 2012 11:25:57 AM BRT using RSA key ID 0AD53983
gpg: Good signature from "Colm O hEigeartaigh <coheigea@apache.org>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DB45 ECD1 9B97 514F 7271  05AE 67BF 80B1 0AD5 3983

Notice that the last 8 digits from the fingerprint shall be equal to the key ID.
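That relationship can be checked mechanically: strip the spaces from the fingerprint and compare its last 8 hex digits with the key ID. A small illustrative sketch (the fingerprint below is the one printed in the previous output):

```java
public class KeyIdCheck {

    // Derives the short (8 hex digit) key ID from a fingerprint string
    // as printed by gpg, i.e. its last 8 hex digits.
    public static String shortKeyId(String fingerprint) {
        String hex = fingerprint.replace(" ", "");
        return hex.substring(hex.length() - 8);
    }

    public static void main(String[] args) {
        String fp = "DB45 ECD1 9B97 514F 7271  05AE 67BF 80B1 0AD5 3983";
        // Should match the RSA key ID 0AD53983 reported by gpg.
        System.out.println(shortKeyId(fp));
    }
}
```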


5. Get the fingerprint to validate the public key:
 gpg --fingerprint 0AD53983

Output:
pub   2048R/0AD53983 2011-02-10
Key fingerprint = DB45 ECD1 9B97 514F 7271  05AE 67BF 80B1 0AD5 3983
uid                  Colm O hEigeartaigh <coheigea@apache.org>
sub   2048R/1C6209E9 2011-02-10



6. Check whether the public key really exists.

In the 3rd step above, we got the public key from MIT.

So, to confirm it using another trusted source, we search for "coheigea" at RedIris.

The result returned:


The ID 0AD53983 was also found on another reliable public keyserver.
Remember that a keyring or a keyserver may also be tampered with, but both of them at the same time... that is quite unlikely.





7. Who is coheigea@apache.org?

We cannot call him on the phone.
Imagine zillions of people doing that... No way!
So, that is where networking comes in.


Our first attempt, following Apache's documentation, is to check Apache's web of trust at
 http://people.apache.org/~henkp/trust/apache.html


He was not found there (the page is probably not up to date), but after a quick web search he was found on Twitter, and he belongs to the Apache community.



8. Does somebody else know and endorse Colm O hEigeartaigh?

Let's verify, issuing:
 gpg wss4j-bin-1.6.7.zip.asc
 or
 gpg --verify wss4j-bin-1.6.7.zip.asc

Output:
gpg: Signature made Wed 25 Jul 2012 11:25:57 AM BRT using RSA key ID 0AD53983
gpg: Good signature from "Colm O hEigeartaigh <coheigea@apache.org>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DB45 ECD1 9B97 514F 7271  05AE 67BF 80B1 0AD5 3983

The key is signed by itself.
To discover more details about its signatures, do:
 gpg --list-sigs 0AD53983

Output:
pub   2048R/0AD53983 2011-02-10
uid                  Colm O hEigeartaigh <coheigea@apache.org>
sig          DEDEAB92 2012-02-06  [User ID not found]
sig 3        0AD53983 2011-02-10  Colm O hEigeartaigh <coheigea@apache.org>
sub   2048R/1C6209E9 2011-02-10
sig          0AD53983 2011-02-10  Colm O hEigeartaigh <coheigea@apache.org>

In the output, we see that the key is also signed by DEDEAB92.
Who is DEDEAB92?

To answer, we do:
 gpg --keyserver pgpkeys.mit.edu --recv-key DEDEAB92

Output:
gpg: requesting key DEDEAB92 from hkp server pgpkeys.mit.edu
gpg: key DEDEAB92: public key "Sergey Beryozkin (Release Management) <sberyozkin@gmail.com>" imported
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   3  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 3u
gpg: next trustdb check due at 2012-12-02
gpg: Total number processed: 1
gpg:               imported: 1


9. Who is sberyozkin@gmail.com?
Again, the same problem: we cannot make direct contact, but we can check the available information to see whether it matches and is coherent.

Searching for "sberyozkin", we find many sources, including his Twitter account, and notice that he also works in the Apache community, and so on.


As we can see, the details match...

So, this is the web of trust: not endorsed directly, but endorsed indirectly through networking, using social media and other sources.

Conclusion:
His ID is found on two trusted public sources; he is a well-known, active member of the Apache community working on the respective subject, and he is also endorsed by somebody else "in the field".

Enough, done.

If the target were not well known, it would certainly be necessary to extend the networking survey, gathering enough information to build a context that could give us some confidence.


Important Considerations


1. Alternatives and Additional Tips

Many sites have a section on the download page that shows how to test integrity and offers a list of the developers' keys.

This is very handy because you enrich your keyring faster; it is more practical than importing keys one by one as needed, as done in the example above.

As mentioned at the beginning, Apache offers the keys on its download pages; for instance, go to Apache Geronimo (the application server).

You find there a reference to a KEYS file containing such keys.


2. Good practices

From Apache's tutorial, we get:

...
Attackers can easily create new keys similar to yours with identical user IDs and comments.
Such a public key may be introduced to your keyring when you download keys from a public keyserver or as part of an import.

...
Trusting that an import contains only the owner's public key is not recommended.
The import may contain additional public keys (either intentionally or not).
So, when using an import, always verify the key ID of interest from another source.

...
                                                                                   

In practice, in most cases it is not possible to get in touch with the owner.
So, consider some good and important additional security practices:

1. Keep track of the number of entries in your keyring.
Check it before and after each import.

2. After an import, always verify it; otherwise your keyring will lose confidence.

3. Create a backup after each import operation, and periodically.
Check http://www.apache.org/dev/openpgp.html#backup-private


To count the number of entries in your keyring, you can use a script like this:

#!/bin/sh
# last update: 11/08/2012 11:34:53 AM
# name: gpgcounter
#
# target: returns the number of entries on a GnuPG keyring.
#
# IMPORTANT NOTE:
# On the GnuPG man page, there is a note:
#   Avoid using the output of this command in scripts or other programs
#   as it is likely to change as GnuPG changes.
#
# So, you shall test the script with your version before using it.
#
# syntax:
#
#   gpgcounter

# set here the GnuPG version used for creation or testing
GPGVER="1.4.9"

pubs=$(gpg --list-keys | grep -c '^pub')
uids=$(gpg --list-keys | grep -c '^uid')
subs=$(gpg --list-keys | grep -c '^sub')
gpgver=$(gpg --version | grep '^gpg' | awk '{print $3}')

if [ "$gpgver" != "$GPGVER" ] ; then
    echo ""
    echo "**************************************************************************"
    echo "[WARN]: this script was tested with gpg v.$GPGVER."
    echo "  The current GnuPG version (v.$gpgver) differs from v.$GPGVER (see: gpg --version)"
    echo "  Check the following command in order to confirm its validity:"
    echo "      gpg --list-keys"
    echo "  It shall return output formatted like the following example:"
    echo "      pub   4096R/2E63F262 2012-07-13"
    echo "      uid          John Xiao (CODE SIGNING KEY) "
    echo "      sub   4096R/64F9AF14 2012-07-13"
    echo ""
    echo "  If the output format differs from the example, the script must be altered to reflect the changes."
    echo ""
else
    echo "[INFO]: gpgcounter totals (pub, uid, sub): $pubs, $uids, $subs at $(date +%y%m%d_%H%M%S) by $USER for gpg v.$GPGVER"
fi


On a *nix environment, create the file gpgcounter.sh with the content above, then run:
chmod 755 gpgcounter.sh
./gpgcounter.sh

The little script will return something like this:
 [INFO]: gpgcounter totals (pub, uid, sub): 6, 7, 5 at 121106_174037 by alsdias  for gpg v.1.4.9                                   

Save that information to an encrypted control file stored somewhere else.
If your keyring is tampered with, you will certainly have a better chance of discovering it.

NOTE:                                                                                                  

On the GnuPG's man page, there is a note:
   Avoid  using the output of this command in scripts or other programs 
   as it is likely to change as GnuPG  changes.

So, the script also checks your current gpg version.
If the current version differs, it prints a warning.
It also records the version it was created or tested for, echoing it into the statistics returned.
That way, the script alerts the user whenever the installed version differs from the one it was written against.



Conclusion                                                                                                              


Security comes from the basis.
Import is a key point.
Think about it.


Additional sources:
Apache's tutorial
apache geronimo download site containing instructions

Thursday, November 1, 2012

gpg - Not enough random bytes available - Best Solution



After issuing the command
  gpg --gen-key

to generate a key pair using gpg, I got the message:


Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 280 more bytes)


BEST SOLUTION

After some searching on the web, I found different procedures and began testing them.

The one that really worked successfully, and also the most practical solution, was the one using rng-tools.
A tutorial to set up rng-tools can be found at:
http://www.howtoforge.com/helping-the-random-number-generator-to-gain-enough-entropy-with-rng-tools-debian-lenny

With it, the gpg command started working without failures and remained stable.

I've created a summary with additional information about the alternative solutions, described below in case it is useful in some way.

Unfortunately, the alternative solutions using  ls / -R  or  find / > /dev/null  and the  --no-use-agent  flag were not always successful.
I got some successes but also failures using the same environment and procedures, so I have discarded them for professional usage; still, they can be a valuable attempt if you have no rights to install packages on the target environment and need a temporary solution during an emergency.



ALTERNATIVE SOLUTIONS

Go to terminal and check your entropy:
  watch cat /proc/sys/kernel/random/entropy_avail

You will get real-time feedback as an entropy number.
The bigger, the better.
The lower the system activity, the lower the entropy number.

GPG requires high entropy.


Go to a second shell and start the following command:
  ls / -R


Leave it running.

Return to the prior terminal - the entropy shall be increasing.

Keep the ls command running.


Go to a third terminal to generate the key pair using this command:
   gpg --gen-key --no-use-agent

That's it. Done.
But...


Why the --no-use-agent flag?

Checking man (man gpg), you see:

       --no-use-agent
              Try to use the GnuPG-Agent.  With this option, GnuPG first tries
              to connect to the agent before it asks for a  passphrase.  --no-
              use-agent disables this option.


       --gpg-agent-info
              Override    the    value    of    the    environment    variable
              'GPG_AGENT_INFO'’. This is only used when --use-agent  has  been
              given.   Given  that this option is not anymore used by gpg2, it
              should be avoided if possible.


Probably this issue is already fixed in newer versions.



If you need to generate more entropy, try this:

- using the find command in another way:

http://michaelheap.com/not-enough-random-bytes-available-gpg-signing/
- using stress tool from:
http://packages.debian.org/squeeze/stress
  suggested by:
http://stackoverflow.com/questions/12257653/pgp-not-enough-random-bytes-available-please-do-some-other-work-to-give-the-o



If this procedure still fails, there is also an interesting procedure using rng-tools at:

https://bugs.launchpad.net/ubuntu/+source/gnupg/+bug/706011


To check your gpg version:
  gpg --version


The information compiled here was based on several posts, among them:

http://avinashpillai.blogspot.com/2011/12/gpg-keygen-not-enough-random-bytes.html
http://stackoverflow.com/questions/11708334/pgp-asymmetric-not-enough-random-bytes-available-please-do-some-other-work-to
http://stackoverflow.com/questions/12257653/pgp-not-enough-random-bytes-available-please-do-some-other-work-to-give-the-o
http://michaelheap.com/not-enough-random-bytes-available-gpg-signing/
https://bugs.launchpad.net/ubuntu/+source/gnupg/+bug/706011

Friday, October 26, 2012

Once on the Internet, It's Done, and There's No "Undo".




A friend of ours posted this on FB:


     IMPORTANT...
     To all my contacts, I ASK A FAVOR: with the Facebook changes,
now everyone gets to know about our things, even people who are not in
our contacts, just because a contact makes a "comment" or "likes"
something of ours. Please note: right here, at the top left, hover
the mouse over my photo, my avatar (without clicking). Wait until a
window appears and hover the mouse over the "Friends" icon (also
without clicking), then go to "Settings" and only then click. A list
will appear: click on "Comments and Likes", unchecking that option.
That way my activities will stay restricted to my friends and family
and will not become public domain. If you copy and paste this on your
wall, you will ensure your own privacy.
     Thank you all. (A GOOD IDEA)


Is that really so?
-----------------------------------

Certainly, the relationships are not altered in the database because of that setting.

The setting is a personal adjustment that filters the display of the information you want to see, and it certainly does not interfere with the social networks' search and association engines.

Disabling an option can guarantee a personalized view, and nothing more.
Why?
Imagine that the personal setting did affect the database: then, once disabled, it could not be enabled again without possible loss of information.

Imagine that A is a friend of B, and B a friend of C.
All of them initially left their settings enabled.
Now, if B, by disabling the option, altered the database, that would require synchronizing the data among A, B and C.
Information would mysteriously disappear from A's and C's screens as a consequence of B's action.
Many cases would look strange.
Examined in this trivial case it seems very simple, but the recursive replication across the whole mesh of associations in the database would be unpredictable and extremely expensive.
And when we talk about social media databases, we are talking about something gigantic; any DBA "breaks into a cold sweat" just thinking about losing the database's consistency.

So it works like an "unbreakable piggy bank": once you drop the coin in, it never comes out!

Another "key" factor: economic interest.
Behind it all there is commercial interest, because the mass of data is sold (or leased, rented, whatever the arrangement may be...) for data warehousing operations, which extract the kinds of profiles and statistics that feed websites, the marketing industry, and all sorts of speculation about behavior, tracking and social trends.

That is where the subsidy that sustains the "free" character of the network comes from.


-----
Social Media Tracking

Extending the text above about the possibility of an "undo" in social media, that is, undoing something already sent, below is a survey (although we already know the story by hearsay) about the information collected by "facebook" apps (and company).



Observing it, we can understand the issue better.

The situation is even more "complicated" because the tracking is performed through several alternative channels.
Data collected from these diverse sources is consolidated in surveys.

With several alternative sources, even if a piece of data could eventually be erased or modified, it could be rewritten.
Why?
That is the side effect of information replication.
Through it, and by consolidating the data, one can compute a reliability rate for the data that was inserted, or else expose the diversity of alternatives for a given user.
For example, the same user with several disparate, incoherent e-mails or addresses, and so on.

The replication of sources promotes the multidimensional enrichment of the data warehouse.
That is, the data set is gradually enriched with new information that generates new context and favors the creation of an extra dimension in the database.
Each dimension can be an aspect of your life...

In conclusion, even the simple "lie" as a form of personal protection becomes ever more fragile or innocuous.

Suggestions to reduce your exposure; remember:

1. Anything that goes through the Internet is public and there is no way back (no "undo").
    Think very carefully before typing.

2. Adopt as a rule not to transmit personal information.
    Restrict yourself to what is strictly necessary or already public in practice (your e-mail, for example).

3. Today's irrelevant, or even innocent, piece of data may be very relevant tomorrow.
    A new context can give meaning to something that had none in isolation.

4. Any piece of information can acquire context as it is combined with others.
    Besides the dimensional analyses of data warehouses, remember the analysis resources offered by artificial intelligence (AI).
    For example, classifications can be performed using Kohonen networks, which can even classify electrocardiograms (ECG).
    Out of curiosity, see the site:

http://www.pee.ufrj.br/teses/?Resumo=2000034001

5. Everything that is free has an expensive price embedded in some way, not explicitly declared, and one you might not want to pay.

6. Avoid installing free software.
    A large share of the software that provides simple audio, video, communication and graphics services is a strong candidate to be a vector of social information, as a form of economic self-sustenance.
    Hardware is cheap nowadays.
    If you need to use untrusted software, dedicate one machine just for that.
    Reserve another machine for personal and private use, with only what is considered safe and necessary.

7. Use and research paid security tools, and prefer paid software from a secure source.
    Remember: nothing works for free; there is no miracle.

8. Encrypt whatever is personal with entropy above 1024 bits.
    And do not keep the key in the same environment...

Configuring Apache2 Virtual Host - Creating a working Directory

Depending on the apache2 version and environment, the configuration has some different details.

Below, you get  summarized procedures to install on Windows 7 and Debian 5.



WINDOWS 7

Edit:

$INSTALL_DIR\conf\httpd.conf

Usually at:
C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf

and find:
    # Redirect permanent /foo http://localhost/bar
then add the following:       

        # mapping dev.php directory
        Alias /devphp "D:/work/dev/php"
        <Directory "D:/work/dev/php">
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
        </Directory>



See additional examples on debian section below.


DEBIAN 5

If a directory requires permission, such as $HOME (the user's home), then the apache2 user (www-data) must be granted access by adding www-data to the user's group.
Below you find three examples.



# adding the apache2 user (www-data) to the user's group:
 sudo adduser www-data user1
# checking:
 groups www-data
# edit the file:
 /etc/apache2/sites-available/default
# and add the following at the end but before the </VirtualHost> tag:

<VirtualHost *:80>
    ...
        # usual configuration on debian to share local files
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>

    # configuration for local development purposes, without requiring any group membership changes
    Alias /dev "/home/dev"
  <Directory "/home/dev">
      Options Indexes MultiViews FollowSymLinks
      AllowOverride None
      Order deny,allow
      Deny from all
      #Allow from all
      Allow from 127.0.0.0/255.0.0.0 ::1/128
      #DirectoryIndex index.html
 </Directory>

    # configuration for personal local development purposes, requiring www-data to be added to the user's group
    Alias /devphp "/home/alsdias/work/dev/php"
  <Directory "/home/alsdias/work/dev/php">
      Options Indexes MultiViews FollowSymLinks
      AllowOverride None
      Order deny,allow
      Deny from all
      #Allow from all
      Allow from 127.0.0.0/255.0.0.0 ::1/128
      #DirectoryIndex index.html
  </Directory>
</VirtualHost>
   
# restart service:
sudo /etc/init.d/apache2 restart


NOTE:

Notice the following directives.
They are important and self-explanatory:

      #Allow from all
      Allow from 127.0.0.0/255.0.0.0 ::1/128
      #DirectoryIndex index.html



Sunday, October 21, 2012

maven references failure - workaround

Overview

This Maven use case presents three alternatives to solve dependencies when the usual method, a Maven update, fails.

It was created from direct experience during a CXF web service project using Eclipse and the Rampart library.

The alternatives shown are:

1. Upgrading to a newer version, where the problem has probably already been fixed, while solving any remaining issue manually.

2. Comments about Maven's exclusion element, a useful alternative when there are few occurrences.

3. A solution for when there are too many occurrences and the prior alternative becomes time consuming.



In a CXF web service project requiring the Rampart 1.4 library, the following dependencies were added:

<dependency>
  <groupId>org.apache.rampart</groupId>
  <artifactId>rampart-policy</artifactId>
  <version>1.4</version>
</dependency>

<dependency>
  <groupId>org.apache.rampart</groupId>
  <artifactId>rampart-core</artifactId>
  <version>1.4</version>
</dependency>

The first approach is to add the dependencies as usual.
After adding them and performing the respective update, a huge list of messages about missing dependencies was generated - exactly 73 lines.

Here is a small sample of the error messages generated:

Description  Resource  Path  Location  Type
ArtifactDescriptorException: Failed to read artifact descriptor for bouncycastle:bcprov-jdk13:jar:132: ArtifactResolutionException: Failure to transfer bouncycastle:bcprov-jdk13:pom:132 from http://ws.zones.apache.org/repository2 was cached in the local repository, resolution will not be reattempted until the update interval of apache-ws-snapshots2 has elapsed or updates are forced. Original error: Could not transfer artifact bouncycastle:bcprov-jdk13:pom:132 from/to apache-ws-snapshots2 (http://ws.zones.apache.org/repository2): Connect times out  pom.xml  /SecureClient  line 1  Maven Dependency Problem
Missing artifact antlr:antlr:jar:2.7.7  pom.xml  /SecureClient  line 1  Maven Dependency Problem
Missing artifact aopalliance:aopalliance:jar:1.0  pom.xml  /SecureClient  line 1  Maven Dependency Problem
Missing artifact asm:asm:jar:3.3.1  pom.xml  /SecureClient  line 1  Maven Dependency Problem
...
...




If Maven fails to download the jars for some references (broken references), a workaround is to try another version, preferably a newer one.

Solution

I began by switching the Rampart version to a newer one:

<dependency>
    <groupId>org.apache.rampart</groupId>
    <artifactId>rampart-core</artifactId>
    <version>1.6.2</version>
</dependency>


but it still failed due to one dependency: xalan 2.7.1.
The message returned was "an invalid file".

The workaround was to complete the task manually.

The xalan dependency was downloaded from
http://mvnrepository.com/artifact/xalan/xalan/2.7.1
and the new jar file was copied over the older one.


How do we overwrite the older jar?

Copy the new jar to the local repository.
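As a runnable sketch (the paths are stand-ins, not your real setup: REPO would normally be your local repository, e.g. ~/.m2/repository, and JAR the xalan-2.7.1.jar you downloaded), the copy just follows Maven's groupId/artifactId/version directory layout:

```shell
# Runnable sketch: "install" a jar into a Maven local repository layout.
# Stand-ins (assumptions): REPO would normally be your <localRepository>
# and JAR the downloaded xalan-2.7.1.jar.
REPO="$(mktemp -d)/repository"
JAR="$(mktemp)"

# Maven stores an artifact under <groupId dirs>/<artifactId>/<version>/
DEST="$REPO/xalan/xalan/2.7.1"
mkdir -p "$DEST"
cp "$JAR" "$DEST/xalan-2.7.1.jar"
echo "overwrote $DEST/xalan-2.7.1.jar"
```

With the real paths substituted, the next Maven build picks up the replaced jar from the local repository.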



Where is the local repository?

To find out where the local repository is, edit:
 /home/$USER/.m2/settings.xml

and check the element:
<localRepository>${user.home}/work/dev/java/repository/mvn_repo</localRepository>


Alternatively, if using Eclipse, check the preferences:
menu Window > Preferences > Maven > User Settings > Local Repository
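From a shell, the element can be checked directly. This is a self-contained sketch: the sample settings file below is a stand-in for your real /home/$USER/.m2/settings.xml.

```shell
# Stand-in settings.xml (assumption) so the sketch is self-contained;
# in practice point SETTINGS at /home/$USER/.m2/settings.xml.
SETTINGS="$(mktemp)"
cat > "$SETTINGS" <<'EOF'
<settings>
  <localRepository>${user.home}/work/dev/java/repository/mvn_repo</localRepository>
</settings>
EOF

# Print the configured local repository, if the element is present:
grep -o '<localRepository>.*</localRepository>' "$SETTINGS"
```

If Maven itself is on the path, `mvn help:evaluate -Dexpression=settings.localRepository` reports the effective value, including the default when no override is set.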


How do we get the reference for a new version?

Go to the Maven repository site.
There you get the snippet for the POM dependency and, if necessary, you may also download the jar file.
Check it on the link above.


Suppose that switching to newer versions is not possible due to compatibility issues.
 

How to fix?

Checking the POM file in the local repository, I noticed a huge list of dependencies created automatically by Maven during the first update attempt.

A possible solution is to use exclusions.
For instance:

<dependency>
  ...
  <exclusions>
    <exclusion>
      <groupId>javax.jms</groupId>
      <artifactId>jms</artifactId>
    </exclusion>
  ...
  </exclusions>
</dependency>

Considering that my POM file contained more than 300 lines and dozens of dependencies, that procedure would be time consuming.
Not advisable at all!

So I had to work on an alternative solution.
Instead of excluding dependencies explicitly, why not do it implicitly, and let the Maven build and the compiler warn if something goes wrong?

To do so, I downloaded the two jar files from the Maven central repository:

rampart-core-1.4.jar
rampart-policy-1.4.jar


After that, the Rampart dependencies were added:

<dependency>
  <groupId>org.apache.rampart</groupId>
  <artifactId>rampart-policy</artifactId>
  <version>1.4</version>
</dependency>

<dependency>
  <groupId>org.apache.rampart</groupId>
  <artifactId>rampart-core</artifactId>
  <version>1.4</version>
</dependency>


Then the manual installations were performed using the following commands:

mvn install:install-file -Dfile=rampart-policy-1.4.jar -DgroupId=org.apache.rampart  -DartifactId=rampart-policy -Dversion=1.4 -Dpackaging=jar
mvn install:install-file -Dfile=rampart-core-1.4.jar -DgroupId=org.apache.rampart  -DartifactId=rampart-core -Dversion=1.4 -Dpackaging=jar




In the code below, notice that the "rampart-policy-1.4.pom" file generated by the manual installation omits the dependencies:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.apache.rampart</groupId>
  <artifactId>rampart-policy</artifactId>
  <version>1.4</version>
  <description>POM was created from install:install-file</description>
</project>


Finally, the project was updated using maven > update dependencies, without any errors.

The final result with rampart dependency correctly assigned:



The successful build:


Saturday, October 20, 2012

Tip Section: The still valuable old stuff - consider it

Usually, we get the general idea that old stuff is useless or garbage.

For instance, if you need to understand in detail the concepts of stubs and skeletons, messaging, transparent fail-over, a full description of EJB principles, etc., you will surely find it in
Mastering Enterprise JavaBeans - Ed Roman.

It's about EJB 2, but it is still a classic, because the concepts are useful for understanding many other technologies like Jabber, JMS, EJB 3, Web Services, etc.

A new book usually does not waste pages repeating in detail old material, even when that material is still important and underlies the new content.
At most, it offers a summary of it.
Very reasonable.



Remember that the new builds on the old; the new is usually a next step that rarely destroys prior knowledge, but improves and extends it.

Using SVN with Subversive and Eclipse #2 - Resource unavailable/locked


If you get a message that an operation (commit, update, etc.) was not possible due to an
"unavailable resource" or "locked resource", even though the resource is unlocked and available,
or other similar errors while performing a versioning task, run a test outside Eclipse.
If you have another version control client, like TortoiseSVN, repeat the same operations against the repository.
If they succeed, you can conclude that the problem comes from Eclipse's versioning environment.
This happens due to a plugin malfunction. Close Eclipse, then go to the workspace directory where the
".metadata" folder is; this directory keeps all the workspace configuration.
The direct way is to delete the ".metadata" directory. When you reopen Eclipse, your personal configurations
will be gone and you will have to redo them. If you keep a backup of the ".metadata" directory, all you
have to do is copy it back.
Now repeat the versioning operations to make sure the problem is solved.
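The reset-with-backup step can be sketched in shell. This is a runnable sketch: WORKSPACE here is a scratch stand-in for your real Eclipse workspace directory (an assumption; adjust the path), and renaming is used instead of deleting so the old configuration survives.

```shell
# Runnable sketch: WORKSPACE is a scratch stand-in (assumption) for your
# real Eclipse workspace directory.
WORKSPACE="$(mktemp -d)"
mkdir -p "$WORKSPACE/.metadata"

# Close Eclipse first. Renaming instead of deleting keeps a backup:
mv "$WORKSPACE/.metadata" "$WORKSPACE/.metadata.bak"

# Reopen Eclipse: it recreates .metadata with default settings.
# To restore the old configuration later:
#   mv "$WORKSPACE/.metadata.bak" "$WORKSPACE/.metadata"
echo "metadata backed up to $WORKSPACE/.metadata.bak"
```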

How to add a script on crontab - short steps


  Vim is the default editor for crontab,
  so I'm going to use it here.
 
  Create your script file - any editor will do, but using vim:
    vim clean_wtmpx.sh
 
  Add your commands - type 'i' to enter insert mode and then type these lines:
  #!/bin/sh
  cat /dev/null > /var/adm/wtmpx
 
  To save and close - hit "Esc" to leave insert mode and then type:
  :x
 
  The first line indicates which shell interprets the script. It's optional.
 
  Set permissions on the newly created file:
  chmod 775 clean_wtmpx.sh
 
  Test your command first by executing it. Type at the prompt:
  ./clean_wtmpx.sh
 
  Open the crontab on edit mode:
      crontab -e
 
  Add this line:
  20 22 * * * dir_path/clean_wtmpx.sh 2>> dir_path/crontab.log
 
  To save and exit the crontab:
  esc
  :x
 
  Explaining the crontab command line:
  dir_path is the path to the "clean_wtmpx.sh" file.
  Every day at 10:20 PM the script "dir_path/clean_wtmpx.sh" will run.
  If any output is generated (for instance, error messages), it is saved to the "crontab.log" file.
  That is certainly not the case for this command, so you can skip the log file, setting the line like this:
 
  20 22 * * * dir_path/clean_wtmpx.sh
 
 
  Extra - short summary:
  To list what you have in crontab, type:
  crontab -l
 
  A crontab entry is divided into space-delimited fields.
  The first five fields set the schedule and are followed by a command field, using this format:
      minutes  hour  day_of_month month day_of_week   command
 
  Tip:
  Always test a new crontab entry by creating a temporary line that starts your command within the next minute or two, just to check it.
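  As a concrete example of such a throwaway test line (the log path /tmp/cron_test.log is hypothetical), an entry that runs every minute:

```shell
# throwaway test entry: runs every minute; check the (hypothetical)
# /tmp/cron_test.log after a minute or two, then delete this line
* * * * * date >> /tmp/cron_test.log
```

  Once the timestamps appear in the log, you know cron is picking up your crontab, and you can switch the schedule fields to the real values.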

konqueror message: "could not register with dcop"


If you get this message when trying to open a file, usually an archive such as a jar, tar, or zip, restart Konqueror as root and try again.



Privacy in Social Media - Can I undo it?


A friend of ours posted this on FB:


     IMPORTANT...
     To all my contacts, I ASK A FAVOR: with the Facebook changes,
now everyone finds out about our things, even people who are not in our
contacts, just because a contact makes a "comment" or "likes"
something of ours. Please note: right here, at the top left, hover
the mouse over my photo, my avatar (without clicking). Wait until
a window appears and hover the mouse over the "Friends" icon (also
without clicking), then go to "Settings" and only then click. A list
will appear: click on "Comments and Likes" to uncheck that option.
That way my activities will stay restricted to my friends and family,
instead of becoming public domain. If you copy and paste this on your
own wall, it will ensure your privacy.
     Thanks, everyone. (A GOOD IDEA)


Is that really so?
-----------------------------------

Certainly, the relationships are not changed in the database because of that setting.
The setting is a personal adjustment that filters the display of the information you want to see, and it certainly does not interfere with the social networks' search and association machinery.

Disabling an option can guarantee a personalized view, and only that.
Why?
Imagine that a personal setting affected the database: then, once disabled, it could not be enabled again without a possible loss of information.

Imagine that A is a friend of B, and B a friend of C.
All of them initially left their settings enabled.
Now, if B, by disabling the option, changed the database, that would require synchronizing that data among A, B, and C.
Information would certainly disappear mysteriously from A's and C's screens as a consequence of B's action.
Many cases would look strange.
Examined in this trivial case it seems very simple, but the recursive replication across the whole mesh of associations in the database would be unpredictable and extremely expensive.
And when we talk about social media databases, we are talking about something gigantic; any DBA "breaks into a cold sweat" just thinking about losing the database's consistency.

Therefore, it is like an "unbreakable piggy bank".
Once the coin goes in, it never comes out!

Another "key" factor - economic interest.
Behind all this there is a commercial interest, because the mass of data is sold (or leased, rented, whatever the arrangement...) for data warehousing operations, which extract the kinds of profiles and statistics that feed the sites, besides the marketing industry and all sorts of speculation on behavior, tracking, and social trends.

That is where the subsidy that sustains the "free" character of the network comes from.

-----
Social Media Tracking



Research on Facebook:
http://www.zonealarm.com/blog/index.php/2012/10/data-hungry-apps



Extending the text above about the possibility of an "undo" in social media, that is, taking back something already sent, below is a published study (although we already know it by hearsay) on the information collected by "Facebook" apps (and company).

Looking at it, we can understand the issue better.


The situation is even more "complicated" because tracking is carried out through several alternative channels.
Data collected from diverse sources is consolidated for analysis.

With several alternative sources, even if a piece of data could eventually be erased or modified, it could be rewritten.
Why?
It is the side effect of information replication.
Through it, and by consolidating data, one can derive a confidence rate for the data that was entered, or even expose the diversity of alternatives for a given user.
For example, the same user with several disparate, inconsistent e-mails or addresses, etc.

The replication of sources promotes the multidimensional enrichment of the data warehouse.
In other words, an ever-larger data set takes shape, making it possible to generate a context that favors the creation of an extra dimension in the database.
Each dimension can be an aspect of your life...

In conclusion, even the simple "lie" as a form of personal protection becomes ever more fragile or innocuous.

Suggestions to reduce your exposure - remember:

1. Anything that goes over the Internet is public and there is no way back (no "undo").
    Think very carefully before typing.

2. Adopt the rule of not sending personal information.
    Restrict yourself to what is strictly necessary or already effectively public (your e-mail, for example).

3. Today's irrelevant, or even innocent, piece of data can be very relevant tomorrow,
    because a new context starts to give meaning to something that had none in isolation.

4. Any piece of information can acquire a context as it is combined with others.
    Besides the dimensional analyses of data warehouses, we have to remember the analysis resources offered by artificial intelligence (AI).
    For example, classifications can be performed using Kohonen networks, which can even classify electrocardiograms (ECG).
    Out of curiosity, see the site:
    http://www.pee.ufrj.br/teses/?Resumo=2000034001

5. Everything that is free has an expensive price built in somehow, not declared explicitly, and one you might not wish to pay.

6. Avoid installing free software.
    A large share of the programs that provide simple audio, video, communication, and graphics services are strong candidates to become vectors of social information as a form of economic self-support.
    Hardware is cheap nowadays.
    If you need to use untrusted software, dedicate one machine just for that.
    Reserve another machine for personal and private use, with only what is considered safe and necessary.

7. Use and research paid security tools, and give preference to paid software from a trusted source.
    Remember: nothing works for free - there are no miracles.

8. Encrypt whatever is personal with entropy above 1024 bits.
    And do not keep the key in the same environment...


eclipse: java: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder" or Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder
