Server errors on molmovdb

Q:
I was trying a couple of tools, Morph Server and RigidFinder, and in both cases I got a server error indicating that files could not be written. Specifically, RigidFinder complains: "Cann’t write to file ‘/tmp/rid74285/upfile1.pdb’." and Morph Server says "Can’t create morph directory!". If this site is still being maintained, please consider addressing these issues.

A:
Thank you for your interest in our servers and for letting us know of
problems you’ve encountered.

RigidFinder's disk filled up. I cleared some space and it should be
working again.

Molmovdb, however, is a more complicated issue. It needs an upgrade,
since it is more than 15 years old. Occasionally we simply roll back
to a previous version, but then any recent submissions are lost. We
also cannot guarantee when the next rollback will be.

We apologize for any inconvenience this may have caused. We provide
related software in our FAQs for those who are interested.
http://www2.molmovdb.org/wiki/info/index.php/Related_Resources

Query Regarding GRAM eQTL & MTC

Q:
I am contacting you as the corresponding author for the paper: "GRAM: A generalized model to predict the molecular effect of a non-coding variant in a cell-type specific manner." PLoS genetics 15.8 (2019): e1007860.

I would like to express my thanks to you and your group for developing & publishing GRAM. I have recently tested it out and the results have been most interesting.

I have begun working with eQTL analysis only recently, and as a result I was wondering what you would recommend as a multiple testing correction method for GRAM-score-based eQTL analysis?

From the literature I have seen that standard multiple testing correction methods such as Bonferroni and Benjamini-Hochberg have been considered too conservative for regular eQTL analysis because they do not take linkage disequilibrium into account, and several permutation-based approaches have been published specifically for eQTL analysis as a result (e.g., eigenMT). However, as you have demonstrated that GRAM-score-based eQTL analysis can differentiate the regulatory effects of variants in linkage disequilibrium, I am unsure whether such methods would be appropriate here.

A:
One application scenario for GRAM is fine-mapping, which assumes that you already have a list of eQTLs and their LD-associated mutations. If you don't have eQTLs and want to try GRAM for eQTL identification, one approach is to compare the GRAM score against a normally distributed background (built from tens of thousands of randomly selected background mutations), infer an empirical p-value for each variant's GRAM score relative to that background, and then apply a Benjamini-Hochberg FDR correction for multiple testing.
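For concreteness, here is a minimal Python sketch of that idea; the score files are placeholders, and the normal-background assumption should be checked against the actual score distribution:

    # Sketch: p-values for GRAM scores against a normal background,
    # followed by Benjamini-Hochberg correction. File names are placeholders.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    background = np.loadtxt("background_gram_scores.txt")  # tens of thousands of random variants
    candidates = np.loadtxt("candidate_gram_scores.txt")   # variants being tested

    mu, sigma = background.mean(), background.std(ddof=1)
    # One-sided p-value: chance of a score at least this high under the background
    pvals = stats.norm.sf(candidates, loc=mu, scale=sigma)

    # Benjamini-Hochberg FDR correction across all tested variants
    reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")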

Frankly speaking, this is a great direction in which to extend GRAM, and we may test it ourselves soon. The most computation-intensive part is calculating DeepBind scores for the background variants, which would take a long time if we wanted to test millions of them. If you have any feedback, further questions, or preliminary findings, please feel free to let us know.

Question re. Yale Morph Server

Q:
I'm having trouble with the multi-chain morph server. My protein includes chains designated by an upper-case "A" and a lower-case "a". When I upload the PDB file and specify all chains, including A and a, the resulting morph PDB file does not contain the lower-case chain a. Is there any way to fix this problem?

A:
Thank you for reaching out to us regarding the morph server. The server is likely case-insensitive internally with regard to chain identifiers. If possible, I would suggest renaming chain "a" to a different, unused letter in the input PDB file (e.g., changing "a" to "B" with a simple script). Then, once you get your output from the server, you can rename chain "B" back to its original label "a".
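For example, a minimal Python sketch of such a script (file names are placeholders; in the PDB format the chain identifier occupies column 22 of ATOM/HETATM/TER records):

    def rename_chain(in_path, out_path, old_id, new_id):
        # Rewrite the chain identifier (column 22, i.e. index 21) on coordinate records
        with open(in_path) as fin, open(out_path, "w") as fout:
            for line in fin:
                if (line.startswith(("ATOM", "HETATM", "TER"))
                        and len(line) > 21 and line[21] == old_id):
                    line = line[:21] + new_id + line[22:]
                fout.write(line)

    rename_chain("input.pdb", "input_renamed.pdb", "a", "B")  # before uploading
    # rename_chain("morph_out.pdb", "final.pdb", "B", "a")    # after downloading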

List of 321 high-confidence SCZ-associated genes from Wang et al. 2018

Q:
I read your excellent work in Wang et al. 2018, and am wondering whether you could kindly share the list of 321 high-confidence SCZ-associated genes. We are studying SCZ iPSC-derived interneurons, and this information would help us understand which DE genes may be causal in our system.

A:
It should be at: http://resource.psychencode.org

Request for data from Zhang and Gerstein NAR (2003)

Q:
I recently came across your paper, "Patterns of nucleotide substitution, insertion and deletion in the human genome inferred from pseudogenes."

I’m interested in the substitution rates in human pseudogenes. Figure 2A from your paper (pasted below) plots these rates. Would you be able to send me these rates as a table?

Additionally, has your group calculated the substitution rates for more families of pseudogenes? (The NAR 2003 paper only analyzed ribosomal protein pseudogene sequences.) I tried poking around psiDR, but wasn't able to find this type of information readily available.

These substitution rate matrices would be very helpful for my research.

A:
see: http://www.pseudogene.org/indel-nar
(via http://papers.gersteinlab.org/papers/indel-nar)

Full set of tQTLs and isoQTLs from Wang et al. 2018

Q:
As a lab, our general interests lie in the intersection between transcriptomics, neurogenetics, and genetic diagnosis. As such, we have made great use of the publicly available PEC resources on http://resource.psychencode.org/, in particular the QTL data. However, I have not been able to locate the full set of isoQTLs and tQTLs without any p-value/FDR filtering, as is available for eQTLs. Is there somewhere I can access this easily? Or does access to the full set of tQTLs and isoQTLs require an application to Synapse?

A:
Currently we don’t provide access to the full set. The full set is very large and we need to discuss where we should share these data. I will let you know once we have any updates.

Questions regarding eQTL calls

Q:
I am trying to reproduce the eQTL calls published here with the file name Full_hg19_cis-eQTL. I'm having some difficulty reproducing the eQTL calls, in particular the p-values, and wanted to figure out where my pipeline isn't matching.

1) I am unsure about the earth-based selection process on the covariate superset. Currently, we are trying to reproduce the covariate selection using the one-hot-encoded covariate superset mentioned in the supplementary material (page 7) of the publication. We are curious which covariates are selected (e.g., the brain bank covariate includes multiple institutes; are all of them selected, or just some?).

2) We are unsure which GTEx pipeline was employed for the eQTL calls in the publication. We are currently using the GTEx pipeline mentioned here, but are wondering if the paper used an older version of the GTEx pipeline that was previously available.

3) Another question is which datasets are fed into the eQTL calls. We are currently working with the Capstone genotype datasets and the TPM expression matrix published here with the file name DER-02_PEC_Gene_expression_matrix_TPM. We are wondering if the genotype/expression filtering was done directly on these files.

4) The last question is that when we call eQTLs using FastQTL, the nominal p-values (that have passed FDR < 0.05) are much larger than the p-values your study published here with the file name DER-08a_hg19_eQTL.significant (so it looks like we're incredibly underpowered). I've attached a figure comparing the nominal p-values reported in your files with those computed by us. We used the Capstone genotype and expression files (as described above), and though we should be somewhat underpowered relative to your study (because we are missing the GTEx genotype/expression files, which require separate agreements), I'm not sure that accounts for the difference in p-value magnitudes. Do you have any thoughts on which part of the pipeline we may have implemented incorrectly that could lead to such a huge difference?

A:
Here are some responses to your questions.

I am unsure about the earth-based selection process on the covariate superset. Currently, we are trying to reproduce the covariate selection using the one-hot-encoded covariate superset mentioned in the supplementary material (page 7) of the publication. We are curious which covariates are selected (e.g., the brain bank covariate includes multiple institutes; are all of them selected, or just some?).
Here are the covariates we used; you can also find the description in the supplementary materials of our paper (http://papers.gersteinlab.org/papers/capstone4/index.html). A rough sketch of assembling them into a covariate matrix follows the list.

Top 3 genotyping principal components
Probabilistic Estimation of Expression Residuals (PEER) factors
Genotyping array platform
Gender
Disease status
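As an illustrative sketch only (the input files and column names are assumptions, not our actual pipeline code), the categorical covariates can be one-hot encoded and combined with the numeric ones roughly like this:

    import pandas as pd

    pcs = pd.read_csv("genotype_pcs.tsv", sep="\t", index_col=0).iloc[:, :3]  # top 3 genotype PCs
    peer = pd.read_csv("peer_factors.tsv", sep="\t", index_col=0)             # PEER factors
    meta = pd.read_csv("sample_metadata.tsv", sep="\t", index_col=0)          # platform, sex, diagnosis

    # One-hot encode categorical covariates; drop_first avoids collinearity
    onehot = pd.get_dummies(meta[["platform", "sex", "diagnosis"]], drop_first=True)
    covariates = pd.concat([pcs, peer, onehot], axis=1)  # samples x covariates
    covariates.T.to_csv("covariates.txt", sep="\t")      # FastQTL-style covariates x samples layout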

We are unsure which GTEx pipeline was employed for the eQTL calls in the publication. We are currently using the GTEx pipeline mentioned here, but are wondering if the paper used an older version of the GTEx pipeline that was previously available.
A detailed description of our eQTL pipeline can be found in Fig. S31 of our paper: http://papers.gersteinlab.org/papers/capstone4/index.html.

Another question is which datasets are fed into the eQTL calls. We are currently working with the Capstone genotype datasets and the TPM expression matrix published here with the file name DER-02_PEC_Gene_expression_matrix_TPM. We are wondering if the genotype/expression filtering was done directly on these files.
You can find details in Fig. S31 in our paper http://papers.gersteinlab.org/papers/capstone4/index.html.

The last question is that when we call eQTLs using FastQTL, the nominal p-values (that have passed FDR < 0.05) are much larger than the p-values your study published here with the file name DER-08a_hg19_eQTL.significant (so it looks like we're incredibly underpowered). I've attached a figure comparing the nominal p-values reported in your files with those computed by us. We used the Capstone genotype and expression files (as described above), and though we should be somewhat underpowered relative to your study (because we are missing the GTEx genotype/expression files, which require separate agreements), I'm not sure that accounts for the difference in p-value magnitudes. Do you have any thoughts on which part of the pipeline we may have implemented incorrectly that could lead to such a huge difference?
I am not sure which genotype file you are using, but we cannot share the merged genotype file, since we integrated some GTEx samples into it. We also used different covariates. So your results will differ from ours if the genotype, phenotype, and covariate inputs are not the same.

Yale Morph Server job not being completed

Q:
I uploaded my PDB files on the multi-chain morph server, the job ID is b499364-832. It has been two days but the results page still says that the job is not yet complete. Is there any problem with my morph? I understand that my files are very large so it may take a long time to finish, but is it possible that it takes this long? I would really appreciate your assistance.

A:
Files generated by the multi-chain server can be found in
http://www.molmovdb.org/uploads/<job ID>/
Yours are in
http://www.molmovdb.org/uploads/b499364-832/

Looking at it, some files seem to be missing.
For example, a complete run looks like
http://www.molmovdb.org/uploads/b649592-16743/

There is a job running that seems to be related to yours:
"/usr/bin/perl ./multi.pl b499364-832 --chains=WCBAXHGFEDJLMNOPQRYAIS
--nframes=8 --email=seiga --engine=CNS --debug"

The short of it is that your job is probably stuck, since judging
from the submit time you seem to have submitted it six days ago. Note
that we cannot guarantee the full functionality of this service, as
it dates from 2005 and has not been fully maintained since.
Occasionally we may roll back the server, but that would mean you
will need to resubmit your job.

Coordinates of TADs and enhancer-promoter pairs from the PsychEncode dataset

Q:
I am developing a pipeline to analyze the Hi-C data from the PsychEncode project. As a sanity check, I want to map the enhancer-transcription start site (TSS) pairs from the file http://resource.psychencode.org/Datasets/Integrative/INT-16_HiC_EP_linkages.csv to the TADs inferred by the PsychEncode project in the file http://resource.psychencode.org/Datasets/Derived/DER-18_TAD_adultbrain.bed.

Looking at the enhancers and TSSs, the TSSs have very "round" coordinates (e.g., 90000, 630000, etc.). Just to confirm, those are still genomic coordinates, right?

Also, are the coordinates of the TADs genomic coordinates, or Hi-C bins? I assumed they were genomic coordinates, but I could not find any of the enhancer-TSS pairs in the same TAD, which is what I expected to see.

A:
RE your questions:
Looking at the enhancers and TSSs, the TSSs have very "round" coordinates (e.g., 90000, 630000, etc.). Just to confirm, those are still genomic coordinates, right?
-> Yes. I used the Hi-C bin coordinates (at 10 kb resolution), not the actual TSS positions. So you can simply overlap the binned TSS coordinates with the actual promoter coordinates to link genes to enhancers.
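As an illustration (file and column names here are assumptions, not the actual release schema), the overlap could be done like this:

    import pandas as pd

    BIN = 10_000
    ep = pd.read_csv("INT-16_HiC_EP_linkages.csv")  # binned TSS coordinates, per the answer above
    promoters = pd.read_csv("promoters.bed", sep="\t", header=None,
                            names=["chrom", "start", "end", "gene"])

    def genes_for_bin(chrom, bin_start):
        # Genes whose promoter overlaps the 10 kb bin [bin_start, bin_start + 10 kb)
        hits = promoters[(promoters.chrom == chrom) &
                         (promoters.start < bin_start + BIN) &
                         (promoters.end > bin_start)]
        return hits.gene.tolist()

    # e.g., attach candidate gene lists to each E-P linkage row (column names assumed)
    ep["genes"] = [genes_for_bin(c, s) for c, s in zip(ep["chrom"], ep["tss_bin_start"])]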

Also, are the coordinates of the TADs genomic coordinates, or Hi-C bins? I assumed they were genomic coordinates, but I could not find any of the enhancer-TSS pairs in the same TAD, which is what I expected to see.
-> TAD coordinates are also genomic coordinates, not Hi-C bins. It's odd that you didn't find enhancer-TSS pairs in the same TAD, because we found that >70% of E-P links are located within TADs.
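A quick containment check along these lines (again, column names are assumed) may help debug the discrepancy:

    import pandas as pd

    tads = pd.read_csv("DER-18_TAD_adultbrain.bed", sep="\t", header=None,
                       usecols=[0, 1, 2], names=["chrom", "start", "end"])

    def in_same_tad(chrom, enh_start, enh_end, tss_start, tss_end):
        # True if a single TAD interval fully contains both the enhancer and the TSS bin
        lo = min(enh_start, tss_start)
        hi = max(enh_end, tss_end)
        return ((tads.chrom == chrom) & (tads.start <= lo) & (tads.end >= hi)).any()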