Wednesday, May 15, 2013
Webpage error details
User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; Tablet PC 2.0)
Timestamp: Wed, 15 May 2013 22:03:02 UTC
Message: Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
Details: Error parsing near '
The cause for me was the following:
I have Reporting Services installed in SharePoint 2010 SP1. I have two reports, and they both take parameters. The main report has parameters that use the parameter panel, and the user specifies the parameters they want. The second report also uses the parameter panel, but it should not, since it takes its parameters from a link on the main report that passes them to it. Only one report can use the panel; the Back to the Parent Report link in Reporting Services doesn't seem to support the scenario where both reports use the parameter panel.
To reproduce the problem, I did the following.
1. Bring up the main report and run with some parameters.
2. Click the link that takes me to the secondary report.
3. Click the Back to the Parent Report button.
4. Click the link that takes me to the secondary report again. At this point the error above appears.
The fix is to change the parameters on the second report to be Hidden. Hidden parameters are not shown in the UI when the second report is displayed, which resolves the problem. Another solution is for the main report to not have any parameters, but that is usually not feasible since users typically need to interact with it.
To change a parameter to Hidden, do the following:
1. Open the report in Report Builder.
2. Go to Report Data | Parameters and expand the list of parameters.
3. Right-click on each of the parameters and do the following:
4. Select the Parameter Properties menu item.
5. On the General tab, change the Select parameter visibility radio button to Hidden.
Monday, May 13, 2013
I have summarized (no pretty diagrams or anything) the basics below for easy consumption. I also have a PowerPoint slide deck with nice architectural diagrams, etc. Click here to view the PowerPoint slides.
Below is an overview of what they offer:
- Excel Services
- Power View
- PowerPivot for SharePoint
- SQL Server
  - Reporting Services
  - Analysis Services
  - Data Mining
  - Master Data Services
  - Data Quality Services
  - Integration Services
  - Data Warehousing
  - SQL Server Data Tools (formerly BI Development Studio)
- Data Explorer for Excel
- PowerPivot for Excel
- Power View for Excel
- Data Mining for Excel
Below are some more details on the specific products.
Power View
- For both SharePoint and Excel
- Part of Microsoft SQL Server Reporting Services 2012
- It provides an ad-hoc visualization experience in SQL Server Reporting Services
- Power View provides intuitive data visualization of PowerPivot models and SQL Server Analysis Services (SSAS) tabular mode databases
- Users can edit views or create new ones
- Very interactive
- Very easy to use
- Encourages Data Exploration and Visualization
- Also available in Excel as an Add-in
- Click to filter technology
- Automatically creates data model
PowerPivot
- For both SharePoint and Excel
- PowerPivot is an add-in that lets end users gather, store, model, and analyze large amounts of data in Excel
- Use Excel to create a PowerPivot
- Save to SharePoint for others to access.
- Viewable in the SharePoint Gallery
- In-memory technology allows working with millions of rows of data
- Mash up data from different data sources
- Create Pivot tables, Charts, and KPIs on millions of rows of data.
- Aimed at advanced power users, but more likely will be used by developers
PerformancePoint Services
- Integrated into SharePoint
- Provides designer as well
- Left-click to drill down into data
- Types of visualizations
- Highly Interactive
- Limited customization of the look and feel of report or dashboard
- Uses OLAP data sources
- Special designer accessible via SharePoint to design everything
Excel Services
- Excel Services, on the other hand, is a very power-user-friendly technology. Those familiar with Excel and PivotTables should take very little time to build very sophisticated reports. SharePoint 2010 renders Excel reports and dashboards as web pages, which makes this technology very easy to deploy.
- Good choice for self-service BI scenarios
- Show an Excel sheet or workbook on a SharePoint site in a web part.
- Can show Power View Excel sheets on the web and keep the high level of interactivity.
Data Mining add-in for Excel
- Uses Analysis Services on the backend
- Excel User interface
- Other advanced algorithms
Data Explorer add-in for Excel
- Still in preview status
- Explore data
- Brings new data sources to Excel import options.
- Like SSIS, but for Excel
- ETL tool for Excel
- Extract: DB, Excel, Text file, web, OData, SharePoint lists, Active Directory, Multiple data source support
- Transform: Cleanse, apply business rules, aggregate, merge, append, mashups, Navigate through data (even joined data)
- Load: load the data into Excel once it has been massaged into what we want
Master Data add-in for Excel
- Connects to MDS data sources / models
- Allows end users to create and maintain data in MDS using Excel
Monday, April 29, 2013
Once you have a table created in your application you may want to show that data in another web application. You can do that using a POST or GET request. For our purposes we will be doing a GET request since it doesn't require any coding and is the easiest to play with.
Here are the API docs. Of particular interest are the following:
- API_Authenticate -- you will need this to get the authToken if you are not logged in
- API_GenResultsTable -- does all the heavy lifting
- API_DoQuery -- use this to create your own custom query instead of using an existing view
According to the sample here, you can embed the QuickBase report on your page by doing something like the following:
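As a minimal sketch of the idea (my own illustration, assuming QuickBase's API_GenResultsTable call; the realm, dbid, and qid values are hypothetical placeholders): requesting the report url with jsa=1 returns JavaScript that renders the report as an HTML table, so the same url can be used directly as the src of a <script> tag on your page.

using System;
using System.Net;

class QuickBaseEmbedExample
{
    static void Main()
    {
        string realm = "mycompany"; // hypothetical: your QuickBase realm name
        string dbid = "bdb5rjd6h";  // hypothetical: the table's dbid from the report url
        string qid = "6";           // hypothetical: the report (query) id from the report url

        // jsa=1 asks QuickBase to return JavaScript that writes out the results table;
        // this same url is what you would reference from a <script src="..."> tag.
        string url = string.Format(
            "https://{0}.quickbase.com/db/{1}?a=API_GenResultsTable&qid={2}&jsa=1",
            realm, dbid, qid);

        using (var client = new WebClient())
            Console.WriteLine(client.DownloadString(url));
    }
}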
Finding the parameters we need using the UI
While the steps above are pretty easy, you need to know what to put in for the placeholders in red. The first thing I recommend is to log into www.quickbase.com using your favorite browser. Click on the tab for the application you want to access. Next click on one of the reports. Now take note of the url. It should map pretty closely to the following (it will typically contain your realm name, the table's dbid, and the report's qid):
POTENTIAL MAJOR SECURITY ISSUE:
In the above example, we will not be getting the authToken (QuickBase calls it a ticket) and instead assume that you are already logged into QuickBase.com. However, if you are displaying the QuickBase report on your own web page using a functional account for QuickBase.com (instead of each user that comes to your website also having a QuickBase.com login), you will need to get the authToken programmatically. Read this discussion on how you might do this. The short answer is that you COULD (but SHOULD NOT) pass the username and password via the url in the browser's address bar. This is dangerous because even with HTTPS, urls are stored in browser history. In transit the url is encrypted from everyone except the browser and the server. The url will also be in the QuickBase.com log files, but since QuickBase already has access to your data, that should not be an issue.
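If you do fetch the ticket programmatically, here is a minimal sketch, assuming the QuickBase HTTP API's API_Authenticate call with the credentials sent in a POST body rather than on the url. The realm, username, and password are hypothetical placeholders, and real code should XML-escape the credentials and check the response's errcode.

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Xml;

class QuickBaseAuthExample
{
    // Gets a QuickBase ticket (authToken) via API_Authenticate; POSTing the credentials
    // keeps them out of the url, the browser history, and address-bar logging.
    static string GetTicket(string realm, string username, string password)
    {
        string url = "https://" + realm + ".quickbase.com/db/main";
        byte[] body = Encoding.UTF8.GetBytes(
            "<qdbapi><username>" + username + "</username>" +
            "<password>" + password + "</password><hours>12</hours></qdbapi>");

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("QUICKBASE-ACTION", "API_Authenticate");
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
            stream.Write(body, 0, body.Length);

        using (WebResponse response = request.GetResponse())
        {
            var doc = new XmlDocument();
            doc.Load(response.GetResponseStream());
            return doc.SelectSingleNode("/qdbapi/ticket").InnerText; // check errcode in real code
        }
    }
}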
Friday, April 12, 2013
Pattern recognition is "the act of taking in raw data and taking an action based on the category of the pattern". Most research in pattern recognition is about methods for supervised learning and unsupervised learning.
Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space. This is in contrast to pattern matching, where the pattern is rigidly specified.
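To make that contrast concrete, here is a minimal sketch of rigid pattern matching: the input either fits the specification exactly or it does not, with no notion of "close". (A statistical pattern recognition counterpart appears further below.)

using System;
using System.Text.RegularExpressions;

class PatternMatchingDemo
{
    static void Main()
    {
        // A rigidly specified pattern: exactly five digits (US-style ZIP code).
        // "12345" matches; "1234O" (letter O) fails outright -- no partial credit.
        Console.WriteLine(Regex.IsMatch("12345", @"^\d{5}$")); // True
        Console.WriteLine(Regex.IsMatch("1234O", @"^\d{5}$")); // False
    }
}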
A wide range of algorithms can be applied for pattern recognition, from simple naive Bayes classifiers or the k-nearest neighbor algorithm to powerful neural networks.
Facial recognition is an example of pattern recognition (not rigid pattern matching)
Image and good info can be found at: http://www.docentes.unal.edu.co/morozcoa/docs/pr.php
Key algorithm families emerged over time: genetic algorithms (1950s), decision trees (1960s), and support vector machines (1980s).
Data mining commonly involves four classes of tasks:
· Clustering - is the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data.
· Classification - is the task of generalizing known structure to apply to new data. For example, an email program might attempt to classify an email as legitimate or spam. Common algorithms include decision tree learning, nearest neighbor, naive Bayesian classification, neural networks, and support vector machines (a minimal nearest-neighbor sketch follows this list).
· Regression - Attempts to find a function which models the data with the least error.
· Association rule learning - Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
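As promised above, here is a minimal sketch (my own illustration) of the classification idea using a 1-nearest-neighbor classifier: a new point gets the label of the closest training example. The "spam"/"legitimate" points are made-up toy data.

using System;
using System.Collections.Generic;
using System.Linq;

class NearestNeighborDemo
{
    // Returns the label of the training point closest (squared Euclidean distance) to query.
    static string Classify(List<Tuple<double[], string>> training, double[] query)
    {
        return training
            .OrderBy(t => t.Item1.Zip(query, (a, b) => (a - b) * (a - b)).Sum())
            .First().Item2;
    }

    static void Main()
    {
        // Toy training set: two clusters labeled "spam" and "legitimate".
        var training = new List<Tuple<double[], string>>
        {
            Tuple.Create(new[] { 1.0, 1.0 }, "spam"),
            Tuple.Create(new[] { 1.2, 0.8 }, "spam"),
            Tuple.Create(new[] { 5.0, 5.0 }, "legitimate"),
            Tuple.Create(new[] { 4.8, 5.2 }, "legitimate")
        };
        Console.WriteLine(Classify(training, new[] { 1.1, 0.9 })); // spam
        Console.WriteLine(Classify(training, new[] { 5.1, 4.9 })); // legitimate
    }
}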
Rewriting or Graph Rewriting may be useful as well.
Machine Learning and Pattern Recognition are really the same thing, just approached from different angles.
Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm.
· Supervised learning generates a function that maps inputs to desired outputs. For example, in a classification problem, the learner approximates a function mapping a vector into classes by looking at input-output examples of the function.
· Unsupervised learning models a set of inputs, like clustering.
· Semi-supervised learning combines both labeled and unlabeled examples to generate an appropriate function or classifier.
· Reinforcement learning learns how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback in the form of rewards that guides the learning algorithm.
· Transduction tries to predict new outputs based on training inputs, training outputs, and test inputs.
· Learning to learn learns its own inductive bias based on previous experience.
Wednesday, March 27, 2013
This solution works for SharePoint Online (O365). It uses Claims based authentication for SharePoint Online, but using a standard ClientContext for SharePoint 2010+ should work great also. For the Claims based authentication I am using this library, but you should be able to use this library. If you want to better understand the Claims based authentication required for SharePoint Online (O365), then I highly recommend reading this series of articles.
I believe the document library you are working on needs to have versioning disabled. If your document library requires versioning, then disable it, run this code, and enable it again. Please verify on a copy of your data that no version history is lost before doing that, though, since I have not tested it.
// requires: using Microsoft.SharePoint.Client; using System; using System.Collections.Generic;
public static void ChangeCreatedModifiedInfo(string webUrl, string serverRelativeUrlOfFileToChange, DateTime? createdDate, DateTime? modifiedDate, Dictionary<string, object> keyValues)
{
    using (ClientContext clientContext = ClaimClientContext.GetAuthenticatedContext(webUrl))
    {
        // FYI server relative path is: "/support/CSS/Reports/output.xlsx"
        var uploadedFile = clientContext.Web.GetFileByServerRelativeUrl(serverRelativeUrlOfFileToChange);
        clientContext.Load(uploadedFile, f => f.CheckedOutByUser);
        clientContext.ExecuteQuery();
        // if not checked out then check it out (depending on your CSOM version you may
        // need to test uploadedFile.CheckedOutByUser.ServerObjectIsNull instead of null)
        if (uploadedFile.CheckedOutByUser == null)
            uploadedFile.CheckOut();
        ListItem listItem = uploadedFile.ListItemAllFields;
        // set created and modified date if they are specified
        if (createdDate.HasValue)
            listItem["Created"] = createdDate.Value.ToString(); // i.e. "6/5/2012 10:19"
        if (modifiedDate.HasValue)
            listItem["Modified"] = modifiedDate.Value.ToString(); // i.e. "6/5/2012 10:19"
        // set properties based on values passed in
        if (keyValues != null)
            foreach (var keyValue in keyValues)
                listItem[keyValue.Key] = keyValue.Value;
        listItem.Update();
        uploadedFile.CheckIn("Updated created/modified info", CheckinType.OverwriteCheckIn);
        clientContext.ExecuteQuery();
    }
}
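A hypothetical usage example (the url, path, dates, and field name below are placeholders, not from a real site):

var fields = new Dictionary<string, object> { { "Title", "Quarterly Report" } };
ChangeCreatedModifiedInfo(
    "https://contoso.sharepoint.com/support",   // hypothetical web url
    "/support/CSS/Reports/output.xlsx",         // server relative url of the file
    new DateTime(2012, 6, 5, 10, 19, 0),        // created date
    new DateTime(2012, 6, 5, 10, 19, 0),        // modified date
    fields);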
Not too bad once you have the solution.
Tuesday, March 26, 2013
It seems amazing to me that Microsoft did such a poor job of giving developers a good way to upload large documents to Microsoft SharePoint Online (aka Office 365 or SharePoint 365). Ideally I would like to use the method that is part of the CSOM (Client Side Object Model), but it does not work with SharePoint Online and only seems to work on SharePoint 2010 (probably 2013 as well, but I have not tested it). I did get
Microsoft.SharePoint.Client.File uploadedFile = docLib.RootFolder.Files.Add(newFileFromComputer);
to work on SharePoint Online, but it is not very useful because it is limited to files of about 3MB in size.
After lots of trial and error, I figured out that the most reliable way to do it is using a standard PUT request and passing the Claims Authentication cookies that are required to make almost any request to SharePoint Online. This works well unless you are debugging and you still have the default ContextSwitchDeadlock managed debugging assistant warning enabled. In that case, for large files that take more than 60 seconds to upload you will get a message similar to this:
The CLR has been unable to transition from COM context 0x1fe458 to COM context 0x1fe5c8 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
Depending on what you are doing this may be okay. In my case, I don’t care if my command line application waits for a long request to continue. This is caused by using the STA threading attribute on the main method of the application.
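If the warning bothers you while debugging, here is a minimal sketch of one way around it, assuming your console app does not actually need a single-threaded apartment for COM or UI work:

using System;

class Program
{
    // Marking Main as MTA (or simply removing [STAThread]) avoids the
    // ContextSwitchDeadlock warning for long, blocking uploads while debugging.
    [MTAThread]
    static void Main(string[] args)
    {
        // ... kick off the long-running upload (e.g. AddDocument(...)) here ...
        Console.WriteLine("Running on an MTA thread; blocking waits won't trip the MDA.");
    }
}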
Below are two methods (one is basically just an overload that calls the other) that upload a document to a document library in SharePoint Online. It should also work on SharePoint 2010+ if you replace the ClaimClientContext with the standard ClientContext that SharePoint 2010+ needs. I'll leave that to you to try on your own for now. You'll notice I have also added functionality to change the created and modified dates of the files after they are uploaded. For details (and source code) click here. Or if you just want to understand more about Claims based authentication, sample code, options, etc., definitely check out here.
Lastly, the keyValues dictionary is just a dictionary whose keys are the actual internal field names (see CAML field names) and whose values are the values for each field.
public static void AddDocument(string fileToUpload, string webUrl, string docLibInternalName, string docLibUIName, string documentSetName, string filenameToSaveAs, DateTime? createdDate, DateTime? modifiedDate, Dictionary<string, object> keyValues, int timeoutInMilliseconds)
{
    string urlToSaveAs = webUrl + "/" + docLibInternalName + "/" + documentSetName + "/" + filenameToSaveAs;
    AddDocument(fileToUpload, webUrl, urlToSaveAs, createdDate, modifiedDate, keyValues, timeoutInMilliseconds);
}
// Uploads most any size file to SharePoint Online (O365) using Claims Authentication. It does NOT use CSOM, and instead uses a standard PUT request
// that has the cookies from the Claims based authentication added to it.
// This solution is based on http://stackoverflow.com/questions/15077305/uploading-large-files-to-sharepoint-365
// To get the claims authentiation cookie, this solution requires: http://msdn.microsoft.com/en-us/library/hh147177.aspx#SPO_RA_Introduction
// or if you want to get the cookies for claims authentication another way, you can use
public static void AddDocument(string fileToUpload, string webUrl, string urlToSaveAs, DateTime? createdDate, DateTime? modifiedDate, Dictionary<string, object> keyValues, int timeoutInMilliseconds)
{
    // For example: byte[] data = System.IO.File.ReadAllBytes(@"C:\Users\me\Desktop\test.txt");
    byte[] data = System.IO.File.ReadAllBytes(fileToUpload);

    // get the cookies from the Claims based authentication and add them to the cookie container
    // that we will then pass to the request
    CookieCollection cookies = ClaimClientContext.GetAuthenticatedCookies(webUrl, 200, 200);
    CookieContainer cookieContainer = new CookieContainer();
    cookieContainer.Add(cookies);

    // make a standard PUT request
    System.Net.ServicePointManager.Expect100Continue = false;
    HttpWebRequest request = WebRequest.Create(urlToSaveAs) as HttpWebRequest;
    request.Method = "PUT";
    request.Accept = "*/*";
    request.ContentType = "multipart/form-data; charset=utf-8";
    request.CookieContainer = cookieContainer;
    request.AllowAutoRedirect = false;
    request.UserAgent = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)";
    request.ContentLength = data.Length;
    request.ReadWriteTimeout = timeoutInMilliseconds;
    request.Timeout = timeoutInMilliseconds;

    // write the file bytes to the request body
    using (Stream req = request.GetRequestStream())
    {
        req.ReadTimeout = timeoutInMilliseconds;
        req.WriteTimeout = timeoutInMilliseconds;
        req.Write(data, 0, data.Length);
    }

    // get the response back
    HttpWebResponse response = null;
    try
    {
        response = (HttpWebResponse)request.GetResponse();
        Stream res = response.GetResponseStream();
        using (StreamReader rdr = new StreamReader(res))
        {
            string rawResponse = rdr.ReadToEnd();
        }
    }
    catch (Exception)
    {
        // make sure the response is cleaned up before letting the caller see the error
        if (response != null)
            response.Close();
        throw;
    }

    // NOTE: the file that was uploaded is still checked out and must be checked in before it will be available to others.
    // The method below includes a checkin command. If the method below is removed for some reason,
    // a checkin method call should be added here so that the file will be available to all (that have access).
    // NOTE: The method below also applies the keyValues passed in. These would need to be done also if the method is removed.
    UriBuilder urlBuilder = new UriBuilder(urlToSaveAs);
    string serverRelativeUrlToSaveAs = urlBuilder.Path;
    ChangeCreatedModifiedInfo(webUrl, serverRelativeUrlToSaveAs, createdDate, modifiedDate, keyValues);
}
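And a hypothetical call to the upload method (every value below is a placeholder; the timeout shown is 30 minutes in milliseconds):

AddDocument(
    @"C:\Users\me\Desktop\test.txt",            // local file to upload
    "https://contoso.sharepoint.com/support",   // hypothetical web url
    "CSS",                                      // document library internal name
    "CSS Documents",                            // document library UI name
    "Reports",                                  // document set name
    "output.xlsx",                              // filename to save as
    null, null, null,                           // leave created/modified/fields unchanged
    30 * 60 * 1000);                            // 30 minute timeout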