Document
stringlengths
395
24.5k
Source
stringclasses
6 values
Since becoming a ranking factor in June 2021, Core Web Vitals have created new opportunities for improvement for many websites. Based on user experience, the metrics which measure page performance are designed to offer a closer look at how websites are coded and how that code renders a useable web page. Google has made it fairly easy to monitor these metrics through reporting in Google Search Console. The dedicated report helps site owners/managers identify opportunities for improvement in the area of Core Web Vitals. So, why does page performance matter? The longer a page load time, the higher the bounce rate. According to Google, if a page load time increases from 1 second to 3 seconds, bounce rate increases 32%. If a page load time increases from 1 second to 6 seconds, bounce rate increases by 106%. We all know that high bounce rates indicate that user’s aren’t happy. Understanding the Core Web Vitals report. The CWV report is based on three metrics: LCP, FID, and CLS. LCP (Largest Contentful Paint) LCP is the amount of time it takes to render the largest content element visible in the viewport from when the user requests the URL. Typically, the largest element is and image or video. FID (First Input Delay) FID is the time from when a user first interacts with your page to the time when the browser responds. Interactions can be a link clicked or button tapped, etc. CLS (Cumulative Layout Shift) CLS refers to the unexpected shifting of webpage elements while the page is still loading. Here Google uses the sum total of all layout shift scores for any shift that isn’t from user interaction. Shifting elements tend to be fonts, images, videos, forms, and buttons. Only indexed URLs can show in the report and only URLs with enough data for the metric will be included. The page status shown is the status of its most poorly performing metric. Navigating the report. The CWV report shows URL performance grouped by status, metric type, and URL group which is groups of similar web pages. The Core Web Vitals overview page chart shows how the URLs on your site perform based on historical user data. Toggle the Poor, Needs improvement, or good tabs to get data on specific statuses. By clicking through to the summary page for either Mobile or Desktop you will see the status and issues for all URLs that Google has data for. The chart shows the count of URLs with a given status on a given day. You can toggle the tabs above the chart to show the number of issues in a given status. By selecting a row in the table below the chart, you will see details about URL groups affected by a selected issue as well as a sample of the related URLs in each issue category. If you click a URL in the examples table, you will see more information about that URL and also a list of similar URLs. If you would like help in determining how Core Web Vitals may be affecting your site ranking, contact Allegro for a full website audit.
OPCFW_CODE
package com.pierless.space.core; import com.pierless.space.display.DisplayObject; import org.junit.Assert; import org.junit.Before; import org.junit.Test; /** * Created by dschrimpsher on 10/17/15. */ public class TestGalacticCoordinate3D { private static final double EPSILON = 0.0001; public EquatorialCoordinate[] equatorialCoordinates; public GalacticCoordinate3D[] galacticCoordinate3Ds; public Double[] distances; @Before public void setup() { equatorialCoordinates = new EquatorialCoordinate[5]; galacticCoordinate3Ds = new GalacticCoordinate3D[5]; distances = new Double[5]; //Vega equatorialCoordinates[0] = new EquatorialCoordinate(); equatorialCoordinates[0].setRightAscension(279.2347); equatorialCoordinates[0].setDeclination(38.7837); galacticCoordinate3Ds[0] = new GalacticCoordinate3D(); galacticCoordinate3Ds[0].setLongitude(67.4482); galacticCoordinate3Ds[0].setLatitude(19.2372); distances[0] = 7.68; //parsecs //Barnard's's Star equatorialCoordinates[1] = new EquatorialCoordinate(); equatorialCoordinates[1].setRightAscension(269.4521); equatorialCoordinates[1].setDeclination(4.6934); galacticCoordinate3Ds[1] = new GalacticCoordinate3D(); galacticCoordinate3Ds[1].setLongitude(31.0087); galacticCoordinate3Ds[1].setLatitude(14.0626); distances[1] = 1.8328; //parsecs //Betelgeuse's's Star equatorialCoordinates[2] = new EquatorialCoordinate(); equatorialCoordinates[2].setRightAscension(88.7929); equatorialCoordinates[2].setDeclination(7.4071); galacticCoordinate3Ds[2] = new GalacticCoordinate3D(); galacticCoordinate3Ds[2].setLongitude(199.7873); galacticCoordinate3Ds[2].setLatitude(-8.9586); distances[2] = 1.97; //parsecs //Galactic Center's galacticCoordinate3Ds[3] = new GalacticCoordinate3D(); galacticCoordinate3Ds[3].setLongitude(0.); galacticCoordinate3Ds[3].setLatitude(0.); distances[3] = 8000.; //parsecs // <TR><TD></TD><TD>51 Peg b</TD><TD>15.36</TD><TD>344.366577</TD><TD>20.768833</TD><TD></TD><TD>0.47200</TD><TD>54</TD></TR> //51 Peg b equatorialCoordinates[4] = new EquatorialCoordinate(); equatorialCoordinates[4].setRightAscension(344.366577); equatorialCoordinates[4].setDeclination(20.768833); galacticCoordinate3Ds[4] = new GalacticCoordinate3D(); galacticCoordinate3Ds[4].setLongitude(90.0627); galacticCoordinate3Ds[4].setLatitude(-34.7273); distances[4] = 15.36; //parsecs } @Test public void testConvert() { GalacticCoordinate3D test = new GalacticCoordinate3D(); test.convertToGalatic(equatorialCoordinates[0], distances[0]); Assert.assertTrue("Long is wrong " + test.getLongitude(), test.getLongitude() - galacticCoordinate3Ds[0].getLongitude() < EPSILON); Assert.assertTrue("Lat is wrong " + test.getLatitude(), test.getLatitude() - galacticCoordinate3Ds[0].getLatitude() < EPSILON); Assert.assertTrue("Dist is wrong " + test.getDistance(), test.getDistance() - distances[0] < EPSILON); Assert.assertTrue("X is wrong" + test.getX(), test.getX() + 7.023452031 < EPSILON); Assert.assertTrue("Y is wrong" + test.getY(), test.getY() - 2.9812785104 < EPSILON); test.convertToGalatic(equatorialCoordinates[1], distances[1]); Assert.assertTrue("Long is wrong " + test.getLongitude(), test.getLongitude() - galacticCoordinate3Ds[1].getLongitude() < EPSILON); Assert.assertTrue("Lat is wrong " + test.getLatitude(), test.getLatitude() - galacticCoordinate3Ds[1].getLatitude() < EPSILON); Assert.assertTrue("Dist is wrong " + test.getDistance(), test.getDistance() - distances[1] < EPSILON); test.convertToGalatic(equatorialCoordinates[2], distances[2]); Assert.assertTrue("Long is wrong " + test.getLongitude(), test.getLongitude() - galacticCoordinate3Ds[2].getLongitude() < EPSILON); Assert.assertTrue("Lat is wrong " + test.getLatitude(), test.getLatitude() - galacticCoordinate3Ds[2].getLatitude() < EPSILON); Assert.assertTrue("Dist is wrong " + test.getDistance(), test.getDistance() - distances[2] < EPSILON); test = new GalacticCoordinate3D(); test.convertToGalatic(equatorialCoordinates[4], distances[4]); Assert.assertTrue("51 Peg b Long is wrong " + test.getLongitude(), Math.abs(test.getLongitude() - galacticCoordinate3Ds[4].getLongitude()) < EPSILON); Assert.assertTrue("51 Peg b Lat is wrong " + test.getLatitude(), Math.abs(test.getLatitude() - galacticCoordinate3Ds[4].getLatitude()) < EPSILON); Assert.assertTrue("51 Peg b Dist is wrong " + test.getDistance(), Math.abs(test.getDistance() - distances[4]) < EPSILON); Assert.assertTrue("51 Peg b X is wrong" + test.getX(), Math.abs(test.getX() + 15.3599908) < EPSILON); Assert.assertTrue("51 Peg b Y is wrong" + test.getY(), Math.abs(test.getY() + 0.016808774) < EPSILON); } @Test public void testGalaticCenter() { GalacticCoordinate3D test = new GalacticCoordinate3D(); test.setDistance(distances[3]); test.setLongitude(galacticCoordinate3Ds[3].getLongitude()); test.setLatitude(galacticCoordinate3Ds[3].getLatitude()); Assert.assertTrue("Long is wrong " + test.getLongitude(), test.getLongitude() - galacticCoordinate3Ds[3].getLongitude() < EPSILON); Assert.assertTrue("Lat is wrong " + test.getLatitude(), test.getLatitude() - galacticCoordinate3Ds[3].getLatitude() < EPSILON); Assert.assertTrue("Dist is wrong " + test.getDistance(), test.getDistance() - distances[3] < EPSILON); Assert.assertTrue("X is wrong" + test.getX(), test.getX() < EPSILON); Assert.assertTrue("Y is wrong" + test.getY(), test.getY() - 8000 < EPSILON); } }
STACK_EDU
This article explains how the Smoothwall Filter can prevent users from accessing objectionable content through a search engine, and how they can be modified to force SafeSearch. Search engines allow users to search for multiple sites. They can return content that was not initially searched for (through search suggestions). They also often load a snippet of each website, which may trigger the content filter. A user’s search terms may also be useful for identifying users trying to bypass the web filter by using similar, but non-objectionable, search terms. Search Term Extraction The Smoothwall supports extracting search terms from URLs. These are analyzed by the dynamic content filter in the same way that regular web pages are. This allows searches to be categorized and reported upon. This is applied to all major search engines (Google, Bing, Yahoo), and certain other services (for example, Flickr, Shutterstock, Liveleak). It is also not a feature that needs to be enabled, however it can be disabled by either whitelisting a domain, or due to it being served over HTTPS without a decrypt and inspect policy in place. A blockpage will be displayed if a user’s search term is categorized against a blocked policy, as it would with regular web content. If a search term is not blocked by a policy, the results page is still scanned by the filter, and a block page may be displayed as a result of this. To ensure that search term extraction is working on a search engine: - Ensure the domain isn’t currently the target of a whitelist policy. - If the domain is served over HTTPS, then ensure that a HTTPS decrypt and inspect policy has been enabled. - Search terms should now be showing up in the search term reports. To see these in realtime, go to Reports > Realtime > Search terms. Users browsing via the can be forced to use SafeSearch by default. There are two content modifications available which will achieve this: - URL Modification - Cookie Modification URL modification works by appending a short string to the end of each query to a given web service. Each service uses a different flag to check if SafeSearch has been enabled. Bing for instance, appends &adlt=strict to a request. In the web filter logs this will show up as: Unfortunately only the second scenario presented here can be effectively modified by the . This is done by rewriting a cookie to show that the user has requested not to see inappropriate content. Both of these methods are included in the content modification Force SafeSearch. SafeSearch via CONNECT header Google provide a method to force SafeSearch across all their domains. By rewriting any requests to google.* to forcesafesearch.google.* all requests are treated as though SafeSearch is enabled. This cannot be unset by the user as well. Finally, it does not require a HTTPS decrypt and inspect policy to be enabled. This content modification works for: Google, Bing & Pixabay. To learn how to enable SafeSearch, see our knowledge base article, Activating SafeSearch on All Major Search Engine Websites Using Content Modification. Instant Results Removal Certain search engines provide search suggestions on a per character basis. These cause problems as these search suggestions aren’t necessarily what the user was searching, and this can lead to the user being flagged in safeguarding reports. This can be rectified by blocking the URL from which instant results are loaded. To remove search suggestions from Google: - Create a new web filter policy, where the What is Search Suggestions, and the Action is Block. - When typing in the search bar in Google now, you should no longer see suggestions appearing underneath the search bar.
OPCFW_CODE
Values of columns in first table refer to column names in second table. How to pull the values from second table? First post, have been searching extensively for a month for an answer to this and figured I would just ask the experts. I have populated a table with patient accounts that have received services at a hospital. I am pulling columns listing the performing physician on each of their procedures. The values of those columns are sequence numbers that point me to a set of columns in a second table. The columns in the second table actually contain the physician identifiers I need. Example: TABLE 1 Account: Phys_Proc1 Phys_Proc2 PhysProc3 PhysProc4 PhysProc5` Patient1 2 5 1 4 5 Patient2 1 3 3 4 0 Patient3 2 0 0 0 0 TABLE 2 Account: Physician1 Physician2 Physician3 Physician4 Physician5 Patient1 500123 500456 500789 600123 600456 Patient2 400321 500700 300876 456789 987654 Patient3 300500 800700 0 0 0 I need up update the records in TABLE 1 with the values from TABLE 2 where the value in TABLE 1 refers to the column name from TABLE 2. EXAMPLE Patient1 had procedure1 performed by '500456' (Phys_Proc1's value is "2" which refers to the Physician2 field in TABLE 2.) Any help would be greatly appreciated. Even a hint at this point would give me a direction to look in. Pointing me toward a specific function name to search is better than what I have now (nothing.) I tried an extensive CASE statement but it didn't pull the values for each patient account and pulled the values from TABLE 2 for the first account and applied it to all patient records. Personally I would redesign to correctly normalized tables and then you would not have this nightmare. I would create a patient table, a physician table and procedure table and then a table that has PatientId, PhysicianID, ProcedureId, and date. UPDATE Table2 SET Physc_Proc1 = CASE WHEN Phys_Proc1 = 1 Then Table2.Physician1 CASE WHEN Phys_Proc1 = 2 Then Table2.Physician2 CASE WHEN Phys_Proc1 = 3 Then Table2.Physician3 CASE WHEN Phys_Proc1 = 4 Then Table2.Physician4 CASE WHEN Phys_Proc1 = 5 Then Table2.Physician5 ELSE NULL END, Physc_Proc2 = CASE WHEN Phys_Proc2 = 1 Then Table2.Physician1 CASE WHEN Phys_Proc2 = 2 Then Table2.Physician2 CASE WHEN Phys_Proc2 = 3 Then Table2.Physician3 CASE WHEN Phys_Proc2 = 4 Then Table2.Physician4 CASE WHEN Phys_Proc2 = 5 Then Table2.Physician5 ELSE NULL END, PhyscProc3 = CASE WHEN PhysProc3 = 1 Then Table2.Physician1 CASE WHEN PhysProc3 = 2 Then Table2.Physician2 CASE WHEN PhysProc3 = 3 Then Table2.Physician3 CASE WHEN PhysProc3 = 4 Then Table2.Physician4 CASE WHEN PhysProc3 = 5 Then Table2.Physician5 ELSE NULL END, PhyscProc4 = CASE WHEN PhysProc4 = 1 Then Table2.Physician1 CASE WHEN PhysProc4 = 2 Then Table2.Physician2 CASE WHEN PhysProc4 = 3 Then Table2.Physician3 CASE WHEN PhysProc4 = 4 Then Table2.Physician4 CASE WHEN PhysProc4 = 5 Then Table2.Physician5 ELSE NULL END, PhyscProc5 = CASE WHEN PhysProc5 = 1 Then Table2.Physician1 CASE WHEN PhysProc5 = 2 Then Table2.Physician2 CASE WHEN PhysProc5 = 3 Then Table2.Physician3 CASE WHEN PhysProc5 = 4 Then Table2.Physician4 CASE WHEN PhysProc5 = 5 Then Table2.Physician5 ELSE NULL END FROM Table2 INNER JOIN Table1 ON Table2.Account = Table1.Account Note: I haven't tried this for syntax. I hope this gives you an idea on how to proceed further. You are truly a god amongst men. This is exactly what I needed. A few minor tweaks to match my environment and it worked perfectly. Thank you, you have no idea how much of a help you have been.
STACK_EXCHANGE
This is the beginning of a new month, and with it, here’s a quote from a friend: Relationships are like plants. They need to be treated with care: Water them too much and you may damage them. Water them too little and they may dry up. Here is another lovely one from Nduli [adapted from some book]: Do you know what’s more dangerous than a villain? A villain who thinks they are a hero. A person like that, there’s nothing they won’t do, and they will always find themselves an excuse. Do you know what’s more annoying than an ignorant person? An ignorant person who thinks they “know”. Such a person will always come up with grandiose ideas founded on nothingness. And should such a person have the means, waste someone else’s time in their pursuit of “nothingness”. Finally [wrt to quotes], here’s one by Ralph Waldo Emerson on making each day a masterpiece (from [James] Clear’s newsletter): Finish every day and be done with it. For manners and for wise living it is a vice to remember. You have done what you could; some blunders and absurdities no doubt crept in; forget them as soon as you can. Tomorrow is a new day; you shall begin it well and serenely, and with too high a spirit to be cumbered with your old nonsense. This day for all that is good and fair. It is too dear, with its hopes and invitations, to waste a moment on the rotten yesterdays. Free And Open Source(FOSS) Work - There’s a Pycon Kenya event coming up this month. I submitted a talk: “Managing Python Dependencies in GNU Guix” and it got accepted here. That said, for this event, I don’t like that I had to create a “sessionize” account to submit a talk :( - Worked some more on the GNU Emacs’ tissue interface. Hopefully, I’ll get something worth blogging about this week. In the meantime, I’ll clean up that package and have my team mates use it. - A team mate received an offer from industry. That’s exciting news, and we wish him all the best :) - Started writing my grant proposal for my Master’s Programme disseration. You can view it here. I want to finish it by the close of this week. - For some school syndicate group, we met up and tried to help one of our members with her project. I’ll be doing less of this– I have too much on my plate. - Last semester’s results are out and I passed. Thus far, I’ve been consistent in my performance, and I want to keep things that way. - Made some progress with: “The Sense of Style” by Steven Pinker. At my current location in the book, Pinker is discussing some grammatical constructs. This section is making further progress with the book difficult. Personal & Miscellaneous - Worked out issues with my partner. Key thing was clear and coherent communication. - Mental note to self: Consuming social media in any of its forms is not a form of entertainment. - Tasks-wise, I can only work on one major task in a day. The rest would just be “busy work”. - Listened to Mike Kayihura’s album: “Zuba”. - Lights out this past (entire) weekend. - Fuel shortage here in Nairobi. - A recurring theme during some random hangout on Saturday: “What does friendship mean to you?” - How do you offer help and support to dear ones without imposing yourself or ideologies on them? - I trimmed down my GNU Emacs configs. I moved from “Helm” to “Vertico”. At one point, I broke my configs and I spent some time trouble-shooting my GNU Emacs :( - In life, instead of “mindlessly” acting, consider first seeking out the right questions to ask. Then act by answering these questions. Life is easier and more fulfilling if you ask the “right” questions. - [E-mail] Seeking smarter people than you - The counter-intuitive rise of Python in scientific computing - Moving Python’s bugs to GitHub [LWN.net] - Thoughts on software-defined silicon [LWN.net] - CPython, C standards, and IEEE 754 [LWN.net] - When and why to deprecate filesystems [LWN.net] - Collective Ownership of Code and Text - Kenya gets sixth submarine fibre cable worth sh44 billion
OPCFW_CODE
Capture everything you know about a visitor without identifying them personally Let’s remind ourselves of how the data structure works in GA4 by referring to this visual: - It’s about Users and Events (rather than hits) - Everything you send into GA4 is an Event with some Parameters - You have to tell GA about these parameters, otherwise they are hard to find - These defined event parameters are called Custom Dimensions / Custom Metrics So far, we haven’t looked too much at user properties but these are very powerful in GA4. User properties can help you do segmenting because you don’t need to keep defining and applying properties to that segment. A user property is permanently tied to that user so they can always be found in that cohort. This means you don’t need to worry about data sampling or running complicated reports because those properties are predefined and tied to the data. In GA4, you can define up to 25 user properties, not counting your event custom dimension. That works out at a 150% increase in the number of dimensions compared to GA3 (50 vs 20). Setting up User Properties The actual setup works in a similar way to how you set up custom dimensions from events and has two parts: - Send properties about a user into Google Analytics (via parameters or GTM) - Tell Google Analytics to accept these parameters and generate reports Google have provided some setup guidelines to help you, although their guide will show you how to do it with the tracking code and in this lesson we will see how to do it with GTM. Step 1: Send user properties into Google Analytics It is up to you what user properties are useful to you, but some examples would be profession, height, high-value customer, nationality, etc. Essentially, things that you can use to define a user that don’t change very often. These shouldn’t be things that are unique to a single user though e.g. credit card details, email address, etc. To find some user properties that you want to create, go to the data layer screen in your Tag Manager account to see what values you are tracking and find the ones you want to recreate in GA4. Or, on the User Properties page of your Analytics account, click Create first user property. Give the property a name and a description and click Create. With the new property created you can Edit, Archive or Mark as NPA (no personalized advertising). Click New User Property to add as many new properties as you need. Step 2: Tell Google Analytics to accept these parameters In GTM, open your GA Configuration Event and under User Properties, click Add Row. Copy the property names from those you just created in GTM. Paste them into the Property Name field. In the Value field, copy the values from the data layer you looked at earlier in GTM. Paste these into the Value field. Testing your Setup In GTM, enter Preview mode, then go to your site and navigate to test if your activity is being tracked correctly. Then, go to the debugger in Analytics to see your data being tracked. - User Properties are a great addition to GA4 - You can define up to 25 user properties and use to build audiences - This will make life easier than the old way of segmenting in GA3 - It’s especially useful for brands who have lots of data to analyze - When you set this up you need your GA settings + properties added to code
OPCFW_CODE
/* This file is a part of @mdn/browser-compat-data * See LICENSE file for more information. */ import assert from 'node:assert/strict'; import bcd from '../index.js'; import query from './query.js'; import { joinPath, isBrowser, isFeature, descendantKeys, } from './walkingUtils.js'; describe('joinPath()', () => { it('joins dotted paths to features', () => { assert.equal(joinPath('html', 'elements'), 'html.elements'); }); it('silently discards undefineds', () => { assert.equal(joinPath(undefined, undefined, undefined), ''); assert.equal(joinPath(undefined, 'api'), 'api'); }); }); describe('isBrowser()', () => { it('returns true for browser-like objects', () => { assert.equal(isBrowser(bcd.browsers.firefox), true); }); it('returns false for feature-like objects', () => { assert.equal(isBrowser(query('html.elements.a')), false); }); }); describe('isFeature()', () => { it('returns false for browser-like objects', () => { assert.equal(isFeature(bcd.browsers.chrome), false); }); it('returns true for feature-like objects', () => { assert.equal(isFeature(query('html.elements.a')), true); }); }); describe('descendantKeys()', () => { it('returns empty array if data is invalid', () => { assert.strictEqual(descendantKeys(123).length, 0); assert.strictEqual(descendantKeys('Hello World!').length, 0); assert.strictEqual(descendantKeys(null).length, 0); assert.strictEqual(descendantKeys(undefined).length, 0); }); });
STACK_EDU
package com.github.nhirakawa.hyperbeam.geometry; import static org.assertj.core.api.Assertions.assertThat; import static org.assertj.core.api.Assertions.assertThatThrownBy; import org.assertj.core.data.Offset; import org.junit.Test; public class Vector3Test { private static final Offset<Double> OFFSET = Offset.offset(0.0001); @Test public void testDotProduct() { Vector3 first = Vector3.builder().setX(1).setY(3).setZ(-5).build(); Vector3 second = Vector3.builder().setX(4).setY(-2).setZ(-1).build(); double dotProduct = first.dotProduct(second); assertThat(dotProduct).isCloseTo(3, OFFSET); } @Test public void testCrossProduct() { Vector3 a = Vector3.builder().setX(2).setY(3).setZ(4).build(); Vector3 b = Vector3.builder().setX(5).setY(6).setZ(7).build(); Vector3 c = a.cross(b); assertThat(c.getX()).isCloseTo(-3, OFFSET); assertThat(c.getY()).isCloseTo(6, OFFSET); assertThat(c.getZ()).isCloseTo(-3, OFFSET); } @Test public void testGetI() { Vector3 vector = Vector3.builder().setX(10).setY(20).setZ(30).build(); assertThat(vector.get(0)).isCloseTo(vector.getX(), OFFSET); assertThat(vector.get(1)).isCloseTo(vector.getY(), OFFSET); assertThat(vector.get(2)).isCloseTo(vector.getZ(), OFFSET); assertThatThrownBy(() -> vector.get(1000)) .isInstanceOf(IllegalArgumentException.class); assertThatThrownBy(() -> vector.get(-1)) .isInstanceOf(IllegalArgumentException.class); } @Test public void testAdd() { Vector3 a = Vector3.builder().setX(100).setY(200).setZ(300).build(); Vector3 b = Vector3.builder().setX(50).setY(40).setZ(30).build(); Vector3 add = a.add(b); assertThat(add.getX()).isCloseTo(a.getX() + b.getX(), OFFSET); assertThat(add.getY()).isCloseTo(a.getY() + b.getY(), OFFSET); assertThat(add.getZ()).isCloseTo(a.getZ() + b.getZ(), OFFSET); } }
STACK_EDU
Please contact Unifiedcomms Group HR directly at +603-5163 2952 or e-mail us at [email protected] if you would like to learn more about the positions available. System Specialist / System Test Engineer A System Specialist / System Test Engineer is responsible for reviewing software design, preparing test plans, performing integration testing and certifying that the delivered software is in accordance with the product specifications and customer requirements. All these activities relate to this position’s role in the software development process. In addition, the System Specialist / System Test Engineer is also responsible for preparing and producing installation packages and guides to aid technical implementation engineers in smoothly and efficiently performing on-site system installation and integration activities. The System Specialist / System Test Engineer also plays a role post-implementation, and serves as the primary contact person and representative of the software development team for software-related issues/incidents/problems. This position works closely with technical support engineers to ensure that performance and availability of the implemented system under his/he care is actively maintained if not improved. - Reviewing software design specifications. - Prepare test plan(s) based on documented software design specifications. - Perform integration testing on internal testbed with internal test tools, including simulation testing of software components and validation of simulated environment and production of test cases to support testing. - Simulation test environment setup and management. - Maintenance and enhancement of internal test tool for integration testing. - Production system site planning including module lay out, access control lists, placement, installation of database servers. - Conceiving, compiling and producing the installation package and installation guide for use by on-site technical implementation engineers to implement systems. - Overall production system site ownership/oversight during system implementation phase, and serving as primary project representative for the software development team. - Assisting the technical implementation team in the actual installation and performing tests on-site as required, to validate results and to address software and system-design related information requests /incidents /problems. - Serves as primary representative and contact point for software development team for the purposes of software maintenance and technical support (Tier-3 support). - Carries out validation of third-party vendor software and internal software upgrades and updates. - Responsible for overall configuration management and configuration control of production systems, knowledge center/specialist for all site-specific information on production systesm. - Carries out change request and technical proposal review and providing feedback to technical product management team. - Training and briefing on software modules and systems to technical implementation and technical product management teams as and when required. - Continuous improvement and enhancement of implementation process and documentation. - Research into new concepts and tools to improve quality of product and quality of support as well as to increase efficiency of system implementation activities. - Responsible for regular house-keeping for the production systems assigned. - Degree in Computer Science, Information Technology, Telecommunications Engineering or equivalent. - 1 – 2 years of experience in software development or software/system testing. Applicants with any of the following capabilities/knowledge would be at an advantage: - Knowledge of C/C++ or Java. - Knowledge of SS7 signaling. - Knowledge of UNIX shell script, PERL or Python. - Strong willingness to learn. - Able to work in a team. - Able to creatively utilize various open source technologies. Interested candidates, please e-mail your comprehensive resume together with a covering letter to [email protected]. Alternatively, post it to: Unifiedcomms Group HR Unified Communications (OHQ) Sdn Bhd Level 2, The Podium, Wisma Synergy No 72, Persiaran Jubli Perak Seksyen 22, Shah Alam 40000 Selangor Malaysia
OPCFW_CODE
#include <espp/wifi.h> #include <esp_event_loop.h> namespace lamp { WiFi::WiFi() { DEBUG << "Init wifi"; tcpip_adapter_init(); ESP_ERROR_CHECK(esp_event_loop_init(WiFi::StaticEventHandler, this)); wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT(); ESP_ERROR_CHECK(esp_wifi_init(&cfg)); ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA)); } Data WiFi::accessPointSsid() const { if(!_isAccessPoint) { return {}; } wifi_config_t config; ESP_ERROR_CHECK(esp_wifi_get_config(ESP_IF_WIFI_AP, &config)); return {config.ap.ssid, config.ap.ssid_len}; } Data WiFi::stationSsid() const { if(!_isStation) { return {}; } wifi_config_t config; ESP_ERROR_CHECK(esp_wifi_get_config(ESP_IF_WIFI_STA, &config)); return {config.sta.ssid}; } void WiFi::SetAccessPoint(const Buffer& ssid) { INFO << "Set access point: SSID" << ssid; Mutex::LockGuard lock(_mutex); ESPP_CHECK(_state == State::none); if(ssid.empty()) { DEBUG << "Disable access point"; ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA)); _isAccessPoint = false; } else { DEBUG << "Enable access point"; ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_APSTA)); wifi_config_t wifi_config; wifi_config.ap.ssid_len = ssid.CopyTo(wifi_config.ap.ssid, 32, true); wifi_config.ap.max_connection = 5; wifi_config.ap.authmode = WIFI_AUTH_OPEN; _isAccessPoint = true; ESP_ERROR_CHECK(esp_wifi_set_config(ESP_IF_WIFI_AP, &wifi_config)); } } void WiFi::SetConnection(const Buffer& ssid, const Buffer& password) { INFO << "Set connection to" << ssid << "with password length" << password.length(); Mutex::LockGuard lock(_mutex); ESPP_CHECK(_state == State::none || _state == State::wait_start || _state == State::started); if(ssid.empty()) { DEBUG << "Disable station"; _isStation = false; } else { wifi_config_t config = {}; ssid.StringCopyTo(config.sta.ssid, 32); password.StringCopyTo(config.sta.password, 64); _isStation = true; ESP_ERROR_CHECK(esp_wifi_set_config(ESP_IF_WIFI_STA, &config)); } } bool WiFi::Start() { INFO << "Start WiFi"; Mutex::LockGuard lock(_mutex); if(_state != State::none) { DEBUG << "Invalid state"; return false; } _state = State::wait_start; ESP_ERROR_CHECK(esp_wifi_start()); return true; } bool WiFi::Connect() { INFO << "Connect WiFi station"; Mutex::LockGuard lock(_mutex); if(!_isStation) { DEBUG << "Station isn't inited"; return false; } switch(_state) { case State::wait_start: DEBUG << "Wait start. Append wait connection"; _state = State::wait_start_connect; break; case State::started: DEBUG << "Connect station"; _state = State::wait_connect; ESP_ERROR_CHECK(esp_wifi_connect()); break; default: DEBUG << "Invalid state"; return false; } return true; } bool WiFi::Disconnect() { INFO << "Disconnect WiFi station"; Mutex::LockGuard lock(_mutex); if(!_isStation) { DEBUG << "Station isn't inited"; return false; } switch(_state) { case State::wait_start_connect: DEBUG << "Wait start and connect. Reset wait connect"; _state = State::wait_start; break; case State::wait_connect: case State::connected: DEBUG << "Station connected. Disconnect"; ESP_ERROR_CHECK(esp_wifi_disconnect()); break; default: DEBUG << "Invalid state"; return false; } return true; } bool WiFi::Stop() { INFO << "Stop wifi"; Mutex::LockGuard lock(_mutex); if(_state == State::none) { DEBUG << "Wifi wasn't started"; return false; } ESP_ERROR_CHECK(esp_wifi_stop()); return true; } void WiFi::OnStarted() { INFO << "On started"; Mutex::LockGuard lock(_mutex); switch(_state) { case State::wait_start: DEBUG << "WiFi started"; _state = State::started; break; case State::wait_start_connect: DEBUG << "WiFi started and wait connection"; _state = State::wait_connect; ESP_ERROR_CHECK(esp_wifi_connect()); break; default: DEBUG << "Invalid state"; } } void WiFi::OnConnected() { INFO << "On connected"; Mutex::LockGuard lock(_mutex); switch(_state) { case State::wait_connect: _state = State::connected; break; default: ERROR << "Invalid state"; } } void WiFi::OnGotIp() { INFO << "On got IP"; Mutex::LockGuard lock(_mutex); switch(_state) { case State::connected: _hasIp = true; break; default: ERROR << "Invalid state"; } } void WiFi::OnDisconnected() { INFO << "On disconnected"; Mutex::LockGuard lock(_mutex); switch(_state) { case State::wait_connect: DEBUG << "Wait connect reset"; _state = State::started; break; case State::connected: DEBUG << "Reset connected state"; _state = State::started; break; default: ERROR << "Invalid state"; } _hasIp = false; } void WiFi::OnStop() { INFO << "On stop"; Mutex::LockGuard lock(_mutex); _state = State::none; } esp_err_t WiFi::StaticEventHandler(void* context, system_event_t* event) { WiFi* instance = reinterpret_cast<WiFi*>(context); instance->HandleEvent(*event); return ESP_OK; } void WiFi::HandleEvent(const system_event_t& event) { switch(event.event_id) { case SYSTEM_EVENT_STA_START: OnStarted(); break; case SYSTEM_EVENT_STA_STOP: OnStop(); break; case SYSTEM_EVENT_STA_CONNECTED: OnConnected(); break; case SYSTEM_EVENT_STA_GOT_IP: OnGotIp(); break; case SYSTEM_EVENT_STA_DISCONNECTED: OnDisconnected(); break; // case SYSTEM_EVENT_AP_START: // OnStarted(); // break; // case SYSTEM_EVENT_AP_STOP: // OnStop(); // break; case SYSTEM_EVENT_AP_STACONNECTED: OnStationConnected(event.event_info.sta_connected); break; case SYSTEM_EVENT_AP_STADISCONNECTED: OnStationDisconnected(event.event_info.sta_disconnected); break; case SYSTEM_EVENT_AP_STAIPASSIGNED: OnStationIpAssigned(event.event_info.ap_staipassigned); break; default: DEBUG << "Unprocessed network event with id" << event.event_id; } } void WiFi::OnStationConnected(const system_event_ap_staconnected_t&) { DEBUG << "station connected to" << "access point"; Mutex::LockGuard lock(_mutex); if(!isStarted()) { ERROR << "Unexpected event"; return; } } void WiFi::OnStationDisconnected(const system_event_ap_stadisconnected_t&) { DEBUG << "station disconnected from" << "access point"; Mutex::LockGuard lock(_mutex); if(!isStarted()) { ERROR << "Unexpected event"; return; } } void WiFi::OnStationIpAssigned(const system_event_ap_staipassigned_t& event) { DEBUG << "station got ip from" << "access point"; Mutex::LockGuard lock(_mutex); if(!isStarted()) { ERROR << "Unexpected event"; return; } DEBUG << "Assign ip" << ip4addr_ntoa(&event.ip); } }
STACK_EDU
04-19-2018 03:40 PM I am using SAS Studio. I'm trying to compute the odds ratios and confidence intervals for all of the levels within a variable. For example, I am looking at how maltreatment exposure predicts chronic pain. Maltreatment exposure has four levels: 1, 2, 3, and 4, to indicate frequency of occurrence. I want to see what the odds are for developing chronic pain if someone has a maltreatment frequency of 1, a maltreatment frequency of 2, and so on. I have tried both PROC LOGISTIC and PROC GLIMMIX, but I only get one odds ratio, instead of an odds ratio for each level of the variable. My predictors include age (8-19 years), gender (0 or 1), placement type (1 or 1), and maltreatment exposure (1,2,3,and 4). My outcome is chronic pain (0 or 1). I would like to look at the odds of chronic pain for all of the levels of all of my variables. TITLE1 "Single-Level Logistic Model Predicting Chronic Pain"; PROC GLIMMIX DATA=work.pain NOCLPRINT NAMELEN=100 METHOD=QUAD (QPOINTS=15) GRADIENT; CLASS PID; * Descending makes us predict the 1 instead of the default-predicted 0; MODEL chronicpain (DESCENDING) = age_c Gender GroupHome RndChron_c / SOLUTION LINK=LOGIT DIST=BINARY DDFM=Satterthwaite ODDSRATIO; ESTIMATE "Intercept" intercept 1 / ILINK; * ILINK is inverse link (to un-logit); ESTIMATE "Chronic Pain if Age=9" intercept 1 age_c 1 / ILINK; ESTIMATE "Chronic Pain if Age=10" intercept 1 age_c 2 / ILINK; Thanks for your help! 04-19-2018 05:47 PM Your code doesn't include any reference to maltreatment and your model doesn't include any reference to PID... Assuming that all your independent variables are classes, you could start with: TITLE1 "Single-Level Logistic Model Predicting Chronic Pain"; PROC GLIMMIX DATA=work.pain NOCLPRINT NAMELEN=100 METHOD=QUAD (QPOINTS=15) GRADIENT; CLASS age_c Gender GroupHome RndChron_c; MODEL chronicPain (event="1") = age_c Gender GroupHome RndChron_c / SOLUTION LINK=LOGIT DIST=BINARY DDFM=Satterthwaite; lsmeans age_c / oddsratio; lsmeans gender / oddsratio; lsmeans groupHome / oddsratio; lsmeans RndChron_c / oddsratio; RUN;
OPCFW_CODE
I was born, raised, and educated in the beautiful north-east of England, first in the small seaside town of Blyth in south-east Northumberland, and later in the thriving city of Newcastle-upon-Tyne. After graduating in 2014 from Newcastle University, I moved north of the border to live and work in Edinburgh. From a young age, I've had a keen interest in all things technical, although I've never been able to pinpoint the start of this fascination. I enjoy writing code, designing and developing programs and games, and expanding my repertoire of languages and technologies. Most of my code projects are available on GitHub. When I'm AFK, I spend as much time as physically possible travelling the world, immersing myself in local cultures, and adding to my ever-expanding collection of Hard Rock Café t-shirts. When I'm not on the road (and sometimes when I am), I spend an unhealthy amount of time and money on my other two vices; third-wave coffee and single-malt Scotch. I currently live in Edinburgh with my better half and our two dogs; Charlie and Sid. I spend a lot of my free time running and Canicross-ing, and I'm also currently learning to speak German. In between everything else, I also have a keen interest in behavioural economics, politics, and podcasts (my favourites include Freakonomics, Radiolab, Planet Money, and 99% Invisible). |IntelliJ IDEA||3 years||Proficient| |Visual Studio||2 years||Familiar| |VS Code||2 years||Familiar| Since University, the primary language I've used to develop has been Java (currently working with Java 11). I have experience of writing Java software in several independent programs, ranging from small independent projects to multi-million line codebases. I have knowledge of many data structures and algorithms, and can use them effectively. Most recently, I have developed microservice applications and integrated them into a wider vanilla-Java architecture using the Spring Framework. I started at FanDuel in autumn 2019, working primarily with Java microservices and Python APIs in the Account & Wallet sector Read more Account & Wallet Java Developer (2019 - present) Since joining FanDuel, I have been primarily working on a migration project to convert existing users in a third-party systems into our in-house solution. This has given me the opportunity to learn about the FanDuel project methodologies from the ground up, as well as having an immediate meaningful impact on a visible and important project for the company. I joined Avaloq as a software engineer in the spring of 2017, working primarily as a Java developer in one of the web banking teams. Read more AFP Web Banking Java Developer (2017-2019) I was the senior developer, and SME, for several key areas within our project, responsible for implementing new innovations and managing the maintenance of these areas. I developed strong relationships with key customers, implemented new functionality to meet client needs, and steered the components through various upgrade cycles and improvements. Scrum Master (2019) I was the primary scrum master within my team, following a variation of the SAFe Scaled Agile methodology. My role involved mentoring my team, handling and prioritising customer issues and requests, and managing the delivery and scheduling of our array of supported software versions. I rejoined CGI as a graduate in the summer of 2014, and began working in various roles in the Government sector, including Java development, performance, and team management. Read more Scottish Government - AFRC Java Developer (2016-2017) After spending over a year learning and working as a performance analyst and team lead, I changed roles to work deeper in the development cycle of the system. I'm currently working as a Java developer in a team of six, using JBoss BRMS and Fuse. I work using a test-driven development process, using JUnit for unit tests and EclEmma for code coverage to provide assurances for my code quality. Other software and technologies used include Oracle SQL, SoapUI, Maven, Subversion, and Jenkins. Performance Team Lead (2015-2016) Having worked as a performance analyst for several months, I was promoted to team lead for non-functional testing within the AFRC project, training three new members in performance testing and analysis, and managing the performance test schedule. I advised senior client management on performance metrics and directions, and worked closely with database and deployment managers to ensure a continued smooth transition to live. Performance Analyst (2014-2015) After my initial few months at the Scottish Government with systems integration and working my way up to team lead, I joined the non-functional team working as a performance analyst. Here, I learned about load-testing and performance metrics, and rebuilt the project's performance testing suite in JMeter from the ground up. The role required me to liaise closely with senior client management, helping me develop important negotiation and client communication skills. 2010-2014 Newcastle University At Newcastle, I was awarded first-class marks for my work in Mobile Development, Programming for Games, Graphics for Games, Games Development, Server-side Web Development, and Cryptography. I also studied Advanced Programming, Database Technologies, Computer Networks, Modelling and Computation, Algorithm Design and Analysis, Internet Technology, and Software Project Management. My final dissertation project was focused on designing and developing a travel-planning website and mobile application. This was developed using PHP and Android, as well as SQL and server technologies, and was awarded upper second-class marks. During my degree studies, I spent a year in industry working for the IT consulting firm Logica (now CGI) . At Logica, I worked as a business and test analyst within the Financial Services sector, developing business scenarios for the client, and designing development criteria for our team of off-shore developers.
OPCFW_CODE
Please note that HP IDOL OnDemand is now HPE Haven OnDemand. The API endpoints have changed to Haven OnDemand. Please see the API documentation for more details. Have you had a play with Find yet? It's a basic HP IDOL OnDemand search interface. What you might not know is that Find was designed to be a customisable, extensible interface - a platform to give developers a head start for building IDOL OnDemand UIs. We've released Find on GitHub under the MIT license so that anyone can take the source code and use it to develop their own interfaces. What am I reading? This is the first blog post in the Complete Find Guide series. We'll be taking you through how to get started with developing interfaces based on Find, adding features, calling new APIs and even submitting patches back to HP Autonomy for inclusion in the master repo. Part 1: Getting Started with Find Find comes in two parts - the frontend and the backend. Both parts have a somewhat different technology stack: - Backbone.js - an MVC-ish framework to add some structure to the code - Underscore.js - nice helper functions like map, reduce, and each. Build to help you work with Backbone.js - jsWhatever - a collection of our formerly-internal, now-open-source utility functions for our stack - Bower - dependency management for the web - means we don't have to package all the libraries listed above along with the project. Instead we can just get them at build-time. There are some other libraries used in the frontend, of course, but those are the core ones. Have a look at bower.json to see what dependencies we've specified. You can find the frontend code in src/main/webapp/static - Spring MVC - server-side MVC framework for clean code separation. - Lombok - generates getters and setters for Java, thank goodness! As you've probably guessed from reading that, the backend is written in Java. This might not be the cool choice in 2015 (I'm a Node.js fan, personally), but it's a solid, dependable, mature language that a lot of developers are already familiar with. You can find the backend code in src/main/java Before we can start hacking about with Find we need to get our development environment set up. We're going to need to install a few tools: We need to download the source code from GitHub, so we're going to need a Git installation. In addition to just the basic git program, you may wish to consider installing: - Git for Windows - Formerly known as msysGit, this gives you a nice set of Unix command line tools and a Bash shell for Windows. - Atlassian SourceTree - A very nice Git UI, available for Windows and OSX. We need to install the latest Java JDK from Oracle. If you're on Linux then you can probably obtain this from a package manager - OpenJDK is fine. Next up, we need an installation of Apache Maven - scroll down the page for the tedious installation instructions. Make sure you add the mvn binary to your system path. We'll need to run Maven from the command line later. Finally, a much simpler one: Node.js. Download the installer and install it. Getting the source code Now that we've got our dev tools installed, we can grab the source code. If we just want to use Find as-is, we can just clone the main repository: - From a terminal/console window, cd to the directory you want to clone Find to. - git clone https://github.com/hpautonomy/find.git If you want to modify your own copy of Find, click the "Fork" button on GitHub, then copy the HTTPS clone URL from your new repo and git clone it. Building and running Find cd into the find directory that you just cloned from Git. We have a couple of prereqs to attend to before we can start the application. - cp ./src/main/filters/filter-dev.properties.example ./src/main/filters/filter-dev.properties This sets up your development config filters file, which is used by Maven. Don't worry about it for now. - Create a "home directory" for Find somewhere on your file system, e.g. "C:\dev\home\find" or "/opt/hp/find". This is used for storing the webapp configuration and log files. mvn jetty:run -Dhp.find.home=YOUR_FIND_HOME_DIRECTORY (e.g. `mvn jetty:run -Dhp.find.home=/opt/hp/find`) (Note: if you're behind an HTTP/HTTPS proxy, you will need to set two additional Java System Properties (the -D arguments): find.https.proxyHost and find.https.proxyPort. Otherwise, Find won't be able to communicate with IDOL OnDemand. Yes, this bit me while writing this blog post (╯°□°)╯︵ ┻━┻)) The console output should end with "Starting scanner at interval of 3 seconds" or similar. This means that the server has started successfully! Navigate to "http://localhost:8080/find" in your browser and you'll see the login screen. As we're in initial setup mode, Find will have created a config file in the home directory you gave it on the command line - open the generated config.json file in an editor of your choice and copy the randomly generated password into the login screen. On the Settings page we need to configure two things - our IDOL OnDemand API key and our admin username and password. Enter your API key, then click "Test Key". If it's successful, you'll get an "API Key OK" message and a list of indexes to search on. Select all the indexes that you want to use. Once you've entered a password for your admin user, click the big "Save Changes" button at the top of the screen. The config.json file will be updated with your settings. If we ever need to reset Find to a clean slate, deleting the config.json and restarting the server will put us back into initial setup mode. Click the "Logout from settings" button at the top of the screen. You'll see the main Find search screen. That's all for today! Thanks for reading! We now have the Find source code on disk and a local copy running. Next time we'll be looking at how to add a new page. You must be a registered user to add a comment here. If you've already registered, please log in. If you haven't registered yet, please click login and create a new account.
OPCFW_CODE
|Three days of testing in a home-cage environment to capture spontaneous behavior measures Measures of locomotor activity and sheltering See Loos2 downloads. - PhenoTyper model 3000 (Noldus Information Technology, Wageningen, The NETHERLANDS) - The cages (L=30, W=30, H=35 cm) are made of transparent Perspex walls with an opaque Perspex floor covered with cellulose-based bedding. - A feeding station and a water bottle are attached onto two adjacent walls. - A triangular-shaped shelter compartment (H=10 cm) made of non-transparent material has two entrances fixed in the corner of the opposite two walls. - The top unit of each cage contains an array of infrared LEDs and an infrared-sensitive video camera used for video-tracking. - EthoVision software (EthoVision HTP 188.8.131.52, based on EthoVision XT 4.1, Noldus Information Technology, Wageningen, The NETHERLANDS - AHCODA analysis software (Synaptologics BV, Amsterdam, The NETHERLANDS Figure 1. Schematic of home cage. Procedure: Automated home-cage observation - Mice are tested in the PhenoTyper cage to assess spontaneous behavior for 3 consecutive days. - Mice are introduced to the test cages in the second half of the subjective light phase (14:00h-16:00h); video tracking starts at the onset of the first subjective dark phase (19:00h). - X-Y coordinates of the center of gravity of each mouse (sampled at a resolution of 15 coordinates per second) are acquired and smoothed using EthoVision software and processed to generate behavioral parameters using AHCODA analysis software. - Move and arrest segments are separated by repeated running medians smoothing of X-Y coordinates (for details see Hen et al.). See Figure 2 for illustrated definitions of of endpoints. - Smoothing settings are chosen such that move segments represent gross movements of the center of gravity, e.g, locomotor activity or rearing; arrest segments reflect complete inactivity or minute movements of the center of gravity, e.g., grooming or eating; and shelter segments are recorded if the center of gravity of a mouse disappeared in the 2-cm zone drawn immediately in front of the shelter entrance. - A shelter segment is ended if the center of gravity is detected continuously for at least 7 samples (0.5 s). - Three additional zones are digitally defined: a Feeding zone, a Spout zone, and an OnShelter zone. - Mice which spend little time in the shelter (<60% of time in the shelter during light phase of days 2 and 3) in combination with being highly inactive outside the shelter (cumulative movement less than 2 cm per 5 min for >25% of time outside during light phase of days 2 and 3) are classified as sleeping outside the shelter and are excluded from the analyses. - Elements of behavior are identified by mouse-determined thresholds. - Short movements (turning or rearing against the wall) and long movements (when mice travel from one location in the cage to the next) are identified. - Mice frequently visit the shelter for a few seconds (short shelter visits) during bouts of activity; long shelter visits indicate resting or sleeping. - To improve the detection of spontaneous behavior in the home-cage, existing analysis methods are adapted to segment continuous behavioral observations into distinguishable behavioral elements (for review, see Benjamini et al). See Loos et al. for details. Figure 2. Segmentation of sheltering behavior and activity into elements. A representative track of ~17 min for a C57BL/6J mouse, dissected into elements by individually determined thresholds. Procedure: Analysis of spontaneous behavior - Activity bouts are defined (start with a long movement and stop when a long arrest segment is encountered, or a shelter visit exceeds the brief shelter visit threshold). Characteristics of activity bouts are binned in 12h time bins, and cumulative and mean duration and/or frequencies are calculated. - A habituation index for a given parameter is calculated by taking the ratio of a 12h time bin on day 3 over day 1. - A DarkLight index is calculated from the 12h time bin values on the third day: (dark value/dark value + light value)). - Activity patterns are analyzed in terms of the change in the proportion of time active in the hours preceding and following the shift in light phase. - The last and first 10 min of each dark and light phase are not included in parameters. Procedure: Analysis of within-strain variability - Principal component analysis (PCA) is performed with Varimax rotation on the data of individual mice for all 115 behavioral parameters after subtraction of strain means to focus on within-strain variability (missing data replaced by strain means). Note that three strains were dropped from the analysis due to low sample size, leaving eight strains. - Subject's scores on PC are estimated using regression. - ANOVA and PCA are performed with SPSS v 20.0. PCA identified 22 orthogonal PCs of within-strain variability across the entire dataset. - activity bouts - dark-light index - light-dark phase transitions - within-strain variability Benjamini Y, Lipkind D, Horev G, Fonio E, Kafkafi N et al. 2010. Ten ways to improve the quality of descriptions of whole-animal movement. Neurosci Biobehav Rev 34:1351-1365. Hen I, Sakov A, Kafkafi N, Golani I, Benjamini Y. 2004 The dynamics of spatial behavior: how can robust smoothing techniques help? J Neurosci Methods 133: 161-172.
OPCFW_CODE
Package on Windows 64-Bit Using package to install a .EXE on 64-bit Windows does not correctly detect that the package is already installed, and therefore re-installs. When using an .MSI installer, there is a fallback that searches for the uninstall string in the MSI if it can't find it in the registry. The fallback made me think the problem was "random" ... of course I had all but one package installing via .MSIs and didn't understand everything. The debug output that led me down this path: [2017-01-20T20:01:22+00:00] DEBUG: Failure to read property 'DisplayName' The registry uninstall location here only points to the 32-bit registry location for uninstall strings. https://github.com/chef/chef/blob/db57131ad383076391b9df32d5e9989cfb312d58/lib/chef/provider/package/windows/registry_uninstall_entry.rb#L82 @rneu31 are you using the 32bit or 64bit version of Chef? @btm Does the line of code I referenced get swapped out somewhere in the 64-bit version? We are definitely using the 64-bit Chef client on the boxes that the error occurred on. Side point not related to this issue in this case, the ChefDK downloads page has two x86_64 and i386 version but they both link to the same file. @rneu31 no, the line doesn't get swapped out, but I believe accessing HKEY_LOCAL_MACHINE\SOFTWARE from a 32bit or 64bit process gives you different results. From what I've just read, from a 64bit process you can read HKEY_LOCAL_MACHINE\SOFTWARE for the 64bit version and HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node for the 32bit keys. We currently only build 32bit ChefDK on Windows, so we serve the same package regardless of which architecture is requested. I will look into my issue further. Right now I added a bunch of logs in registry_uninstall_entry.rb. When iterating over the keys it shows every key except for the package that was just installed, which is weird, since I can eye-ball verify that it's in the registry. @btm What would cause Chef to not see that a registry key is present? When does Chef read from the registry? Currently, my .EXE installs, notifies a reboot.. machine reboots, Chef runs again, does NOT detect that is installed so it installs again, which triggers another reboot... then when Chef runs this time it detects it's installed and we're golden. It is almost as if after the first reboot Chef isn't re-reading the registry keys, the second install is not really doing anything productive except causing another reboot, which after another "extra" reboot causes Chef to see that the reg key is there. Does this trigger any thoughts?? This issue itself can be closed -- sorry for the noise. No worries about the issues. Happy to help. It should read the registry as necessary, there's no need to cache. I thought we had a a property on the windows_package resource for setting the registry key we search for, but it looks like the registry key that the uninstall string is under must exactly match the name of the resource. Are you sure these match. https://github.com/chef/chef/blob/master/lib/chef/provider/package/windows/registry_uninstall_entry.rb#L43 I printed/logged key a few lines prior to that, and manually checked the registry and compared w/ the printed results, and the one I cared about was missing.... until the weird re-install again and reboot yet again then it's found. After the install is done, before the reboot, have you verified the registry key? Installers finishing on the next startup is a common thing on Windows. Yoy should double check that package name matches HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall HKEY_LOCAL_MACHINE\Software\Wow6464Node\Microsoft\Windows\CurrentVersion\Uninstall I have some programs which add space at the end of package name. Tried to Install and uninstalled MSI and EXE package to reproduce issue https://gist.github.com/harikesh-kolekar/711bcf2c49271ae3762318465258a826 its execute successfully And also tried to install package multiple time but its install and restart at one time. https://gist.github.com/harikesh-kolekar/c78a8b18d38a9f82e0995ba3e005b0b8 Can you please provide cookbook and page to reproduce the issue? @rneu31 , please share the recipe and the steps to reproduce this issue. @harikesh-kolekar has tried with the scenario that you suggested, but the recipes worked successfully if the name of the package provided in the recipe matches with the name in the registry. Please check this recipe: https://gist.github.com/harikesh-kolekar/711bcf2c49271ae3762318465258a826#file-install-package-32-64-bit-L56 @harikesh-kolekar had to append (x64) to the package name while uninstall, since the package name in the registry has it. While during install, providing (x64) was not required. @NimishaS It seems like the package that I'm trying to install acts very inconsistently, which made it seem as if Chef was missing something that was in the registry. This can be closed. @btm , @sjvreddy, please close this issue
GITHUB_ARCHIVE
GSoC 2012 Midterm Interview Half-time for our Google Summer of Code projects - time to pause for a moment and see what has been accomplished in the last few weeks. We asked our students Edo Monticelli, Martin Hundebøll and Spyros Gaster to bring their work to a state of a working prototype, which can be found in their respective git repositories on https://git.open-mesh.org/ or on the wiki in their respective project pages (Edo: GSOC2012_BW, Martin: Fragmentation, Spyros: BackwardsCompatibility). We have also interviewed them about their experiences with the Google Summer of Code, which we don't want to hold back. :) Question: The midterm of your GSoC project has been reached. Can you describe in a few sentences what you have achieved so far and which tasks remain for the second half of this GSoC ? Martin: I am working with a new implementation of the fragmentation feature in batman-adv. The current implementation only fragments packets of type BATADV_UNICAST, but a more and more features are added to batman-adv, other types should be fragmented as needed. While we are at it, we also want to add support for more than two fragments per packet, and merging of fragments if they are forwarded on an interface with large enough MTU. So far, I have developed a working prototype of the new fragmentation. It is (main-)feature complete in the sense that our goals for the new fragmentation are implemented and work: packet-type-independent, multiple fragments, routing of fragments. The solution is based on a encapsulation-of-encapsulation, where (the encapsulating batman-adv) packets with size bigger than MTU, are split and encapsulated with a fragment header. Now I need to make the code SMP- and architecture-safe, and of course find and fix bugs. Also, my mentors will probably have a lot of suggestions that I need to consider and work with. Spyros: My project is extending the batman-adv protocol to be backwards compatible through the use of tvlv(Type Version Length Value) information messages. So far I have made a working prototype for the project which transfers the gateway announcement tvlv. Now I have to polish the existing code, do some bug-fixes (thank you for your remarks everyone) and finally include the right function calls in various places in the code so as tvlvs are part of the protocol. Edo: I am implementing in kernel space a protocol for the bandwidth measurement, in order to have a lightweight approximation of TCP behavior. At the moment the protocol is working with fixed size window and cumulative acknowledgment. What should be done in the next month is a lot of testing and bugfixing. Some features are still missing, like the possibility to choose if a node is sender or receiver. Question: Looking at the past weeks what have been your greatest challenges and how did you master them ? Martin: It is always wonderful to live in the world of SKB-pointers, where the whole thing may break, if you forget to (re)set a single pointer in the skb-struct. I have spent quite some time with printk's and skb->foo's :) Spyros: I had some trouble making myself comfortable with the linux kernel coding style and learn how to interpret the kernel panic logs. Even though I'm not a pro at both of them I have at some degree mastered them with a solid amount of help from the mentors and the community and some scolding of course. Edo: The worst moment was at the beginning, when I had to start coding with very small knowledge of the batman code and no experience of kernel-space programming. The greatest difficulties were bound to kernel related techniques and features, like workqueue and locks. I have been able to overcome difficulties by looking at the batman code that manages similar issues and with the community help (mentor and IRC people). Question: What has been the most exciting experience relating to your GSoC project so far (e.g. mastering a technique, learning new approaches, successes, etc) ? Edo: When the project worked. Also to solve some hard bug has been of great satisfaction! Martin: I find it very exciting that I can develop and contribute an entire feature to batman-adv (i.e. fragmentation). By being the author of such a feature, one feel responsible for it and get to take one step up the "batman-adv-developer-ladder". Spyros: Pretty much everything about gsoc has been exciting but if I have to pick just one aspect I choose the part that I'm working with others on a code-base written by them. So far I had only done university assignments which even though they were enlightening enough, they were just newbie-level example code compared to the GSoC requirements. Now I hope I get to see how the pros do it. Question: Could the batman-adv organisation (website, community, mentors, individual supporters, etc) have done anything different to facilitate your life as GSoC student ? Was there something you considered too complicated or even scary ? Spyros: No, the organisation has already provided more than enough for me. The mentors provide much of their time for feedback and tutoring meetings and the community is there when I have a question however stupid the question is. Edo: In general I found the batman organization adequate and of great help. Martin: I think my mentor(s) should visit me in Denmark. If not during the GSoC time, then at least within 2012. If I should mention one serious improvement also, it could be more assistance when defining the project goals and writing the application. Question: Do you have any advice, words of wisdom or valuable feedback you'd like to share with future batman-adv GSoC students (with regards to expectation, preparation and time consumption for example) ? Edo: I found effective to agree with my mentor and the batman community on the tasks to develop and how to develop them, so that the project could benefit of their advice. So the help of the community for my has been fundamental. Martin: If you want to be a GSoC-student with batman-adv in 2013, you might as well get started now. Download the batman-adv source, install it on three laptops and get your first mesh running. Then buy a book about SKB's and kernel development, and ask on IRC, if there are any low hanging fruits, that you can pick to become familiar with batman-adv. By getting familiar before the beginning of next years GSoC, you make it a lot easier to fulfill the goals! Spyros: Start early, never stop, familiarize yourself with everything first, listen to the mentors, and above all when in trouble and you cant find the answer online ask at the channel. You are in open source and the greatest thing is the community and the incredible geeks which are part of it. Oh and buy the mentors many beers when you see them, put that gsoc money to good use :P(kidding) Thanks a lot to the students for their good work, keep it up! The B.A.T.M.A.N. Team
OPCFW_CODE
Selected papers and talks on Deep Learning Theory Deep learning theory has made some good progress in the last years. Below is a personal (short) selection of papers and talks giving some theoretical understanding of Deep Learning (with a focus on feature learning). Many thanks to Lenaïc Chizat for many of the references. Neural tangent kernel In wide neural networks, with standard initialization, the behavior of the neural network (NN) is well understood and corresponds to a kernel regression. Let us denote by \(\theta_0\) the initial parameters and by \(f(\theta,x)\) the NN output with parameter \(\theta\). It has been shown that when trained with gradient descent (GD) the infinitely wide NN with standard initialization behaves like a kernel regression with kernel $$K(x,y)=\langle \nabla_\theta f(\theta_0,x), \nabla_\theta f(\theta_0,y) \rangle.$$ The kernel \(K(x,y)\) is called the neural tangent kernal (NTK). The NTK limit is nicely explained in these [Short video, Long video], and in this [paper]. A more detailed presentation can be found in this [paper]. Feature learning in wide 1-hidden layer neural networks Feature learning is considered as one of the major ingredient of the success of deep learning. In the NTK regime mentioned above, no feature learning occurs. This suggests that, in the wide asymptotic, the scaling of standard initialisation is not appropriated. Instead, the scaling corresponding to hydrodynamic limit allows for feature learning. 1-Hidden layer NN in the hydrodynamic regime has been intensively investigated recently. Some interesting feature learning phenomenons have been exhibited. Feature learning in wide 1-Hidden layer NN [video] In the paper Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss, Chizat and Bach analyse the evolution of the gradient flow corresponding to the limit of GD with infinitesimal gradient steps. In the considered setting, they essentially show that, after a transient kernel regime, the NN converges to a max-margin classifier on a certain functional space. This can be interprated as a max-margin classifier on some learned features. From a statistical perspective, it follows that the NN classifier adapts (at least) to the intrinsic dimension of the problem. Wide 1-Hidden layer NN learns the informative directions [video] In the same direction, in the paper When Do Neural Networks Outperform Kernel Methods? Ghorbani et al. show that, compared to Kernel methods, wide 1-hidden layer NN is able to learn the informative directions in the data, and thereby to avoid the curse of dimensionality. Deep is better even in linear models When the activation funtion is the identity \(\sigma(x)=x\), neural networks reduce to a simple linear model \(f\big(\theta=(W_1,\ldots,W_L),x\big)=W_1\ldots W_L \,x\). Yet, GD on this model can lead to interesting solutions. For example, Noam Razin and Nadav Cohen show in Implicit Regularization in Deep Learning May Not Be Explainable by Norms that, in a simple problem of matrix completion, GD will implicitely minimize the effective rank of the solution [video]. Hierarchical learning of the features and purification In a series of two papers Allen-Zhu and Li investigate how the features are learnt and what is the impact of adverserial training on the learnt features. The results of these two papers are presented in this [video]. Forward feature learning / backward feature correction In the paper Backward Feature Correction: How Deep Learning Performs Deep Learning, they show, for resnet-like NN, how the hierarchy of features are learnt via a progressive correction mechanism during the SGD. This result might be connected to the observation of Maclach and Shalev-Shwartz that, at least in a fractal toy model, GD will find good solution, if shallow networks are already good [Is depper better only when shallow is good?, video]. In the paper Feature Purification: How Adversarial Training Performs Robust Deep Learning Allen-Zhu and Li consider data modeled by a sparse decomposition on a (hidden) dictionary. They show how adversarial learning lead to a purification of the learned features, leading to a more sparse (and robust) representation of the data. Greg Yang and Edward Hu describe in Feature Learning in Infinite-Width Neural Networks some possible non-degenerated limits of wide deep NN. They exhibit some scaling where feature learning occurs and they explain how the limit distribution can be computed with the tensor program technique. More precisely, in the wide limit with Gaussian random initialisation, every activation vector of the NN has iid coordinates at any time during training, with a distribution recursively computable (in principle at least). [video] This paper builts on a series of 3 previous papers on tensor programs Tensor programs I Tensor programs II Tensor programs III
OPCFW_CODE
Postmortem of two projects and 100 Central government will now 100% fund the development projects in the north eastern region the orphan roads are the ones which lie between two states and were. You can more effectively design and execute future projects when you take advantage of lessons learned through experience 100 ways to be a better boss launches. Postmortem reviews: purpose and approaches in birk et al use two techniques to carry out the postmortem to try postmortem reviews at the end of projects. Postmortem: downtime, july 5, 2017 over the next two days, the team communicating constantly including new projects. Postmortem animal activity is an of postmortem animal attacks on human corpses will be evaluated by should encompass at least an area of 100 m. Experimental gameplay project postmortem fall 2005 arnab basu on that making 100+ mini-games was an unreasonable goal that would result in two of the rules. 100 + million for the remaining two projects individual practitioner moving between projects methods for analyzing postmortem data for use on an. Injury-related mortality in south africa: a retrospective descriptive study of postmortem investigations richard matzopoulos a, megan prinsloo a, victoria pillay-van wyk a, nomonde gwebushe b, shanaaz mathews c, lorna j martin d, ria laubscher b, naeemah abrahams c, william msemburi a, carl lombard b & debbie bradshaw a. Patricia cornwell (born patricia carroll daniels there are two remarkable style shifts in the scarpetta postmortem, body of evidence, all that remains. Industry guide for beef aging 1 industry guide for two days after harvest (two days postmortem), ten select 666 25 147 276 491 658 866 1000. Learning from projects: describes how firms can learn from projects through postmortem analysis a themed collection containing two or more items at a special. Augmenting experience reports with lightweight postmortem reviews projects we will look at two different methods for capturing than 100 pages. Even more ambiguous are the community profile differences of the postmortem only two known labs in the united research projects to identify and. Charlie sheen, actor: two and a half men show all 100 episodes 1998/i postmortem james mcgregor (as charles sheen. The path post mortem the path the two of us radically redirect game connection is an event where developers can present their projects to publishers through. Stiff pose victorian postmortem photography the flash of a 100+ year flash the two posing stands appear to have had the neck supporters removed. Projects selected for postmortem review short 6 100 large 1y short 7 evaluation by the customer in two projects. Postmortem of two projects and 100 On wednesday 19th july, 2017 a bug found in the multi-signature wallet (multi-sig) code used as part of parity wallet software was exploited by parties. Home / commentary / 2017 fantasy postmortem: denver broncos with 397 total yards and two touchdowns over weeks 1 booker projects. - Postmortem redistribution of two antipsychotic drugs, haloperidol and thioridazine, in the rat. - Read this essay on project postmortem are more than 100 animal species on the endangered who selected the same vendor for two different projects. - A good postmortem analysis, as all monitoring console projects shared two main used about 1,000 cpus and 8 tb of ram to index 100 tb of data every day. - Postmortem audit review of projects postmortem audit review of projects postmortem audit review of projects postmortem is guaranteed to be 100. - Release 51 postmortem telethon – mar 2, 2018 nearly all of the kickstarter-type projects i’ve backed have box damage due to +100 been asking for this for. Two sres don't necessarily produce better results and a 100 request/second service can turn into the most effective phrasing for a postmortem is to. The project postmortem: an essential tool for the a month or two later on longer projects the postmortem gets written exclusively by two or three top. Beth works for a company where she is involved in a lot of group projects because he did not finish step two by project post-mortem: report & questions. Tivities which can be organized for projects either from under 10 to more than 100 pages of postmortem two techniques are used in both types of. The postmortem is in for inhaled insulin being a viable product in the market place after pfizer wrote off their multi-billions in their investment for the first inhaled product, exubera, we now have the second such product that has performed worse than the short time exubera was available for patient’s use. The trials and tribulations of two guys who made a successful mobile two guys made an mmo: the growtopia postmortem mysql based projects and websites.
OPCFW_CODE
RLC Q factor measurement I have a simple parallel RLC circuit (R in series to L and C which are in parallel to each other). I am using a 0.01 micro farad CK05 capacitor a 100k resistor and a 10mH inductor. I measure a resonance frequency of 17.1kHz which is close to the theoretical resonance frequency of 15.9kHz. My problem is in the bandwidth. I am measuring a bandwidth of approximately 1.5kHz giving a Q factor of approximately 11.4 while the theoretical value is bigger by an order of magnitude i.e. 10 times larger (using \$ Q=\omega_0RC \$ for parallel RLC circuit I calculated Q=~100). I measured the bandwidth by varying the frequency and measuring the frequencies where \$V=\frac{\sqrt{2}}{2}V_{max} \$ (using the cursors and measurement options on a digital scope). What may be the reason for this kind of error? below I added a picture of the components I used, from left to right: resistor capacitor and inductor. I used an ohm meter and measured the DC resistance of the inductor to be 66 ohm. What is the resistance of the inductor you are using? Are you referring to the ESR? I have no knowledge of that and I do not know how to experimentally measure it. But I'm always open to learning new things :) No, I mean the DC resistance. You can measure it with an Ohm meter. I'm no longer in the lab so I'll check that tomorrow and update my question. But anyway, how exactly will this affect the bandwidth of the circuit? I measured it as suggested and found it to be 66 ohms Some people here saw were I was going with this and gave the answer. I was hoping you will figure out yoursel that Q=Xl/R. So the Q of your iductor is 16 or less. You should put this resistance in series with you inductor and recalculate your Q. The resistance of this inductor will actually be a bit higher because of skin effect. So do you understand why you Q is not 100 now? The Q factor for any tuned LC circuit cannot be higher than the Q factor of the inductor itself and, if the inductor's series resistance is high enough, it will dominate the Q factor of the tuned circuit. So, given that you have witnessed a Q of 11.9 with an inductive reactance of 1049 Ω at 16.7 kHz, the ESR of the inductor must be about 88 Ω. Now this may not directly translate to the measured DC resistance because of skin effect, proximity effect and possibly core losses but I would expect that the inductor you have chosen is a few tens of ohms in DC resistance. From wiki, a series RLC tuned circuit has a Q of: - \$\dfrac{1}{R}\sqrt{\dfrac{L}{C}}\$ and, if you rearrange the formula when the circuit is at resonance you get: - Q = \$\dfrac{\omega_0L}{R}\$ and this is the same as the Q factor for an inductor. What is the relevance of this you might ask? It's relevant because if you throw away the 100 k feed resistor you are left with L and C in series with the AC resistance of the coil; in other words, the best Q you can achieve is defined by the Q of a series tuned circuit driven from a perfect voltage source. can you direct me to a proof showing that a tuned circuit's Q factor cannot be higher than the Q factor of the inductor It uses? Look up the Q factor for a series tuned RLC circuit because when you remove the 100 kohm driving resistor, in effect this is what you have with the inductor's series resistor. Also, can you confirm what inductor you used? I added a picture of the inductor I used and I measured it's DC resistance using an ohm meter. it was a few tens of volts as you predicted, more specifically 66ohms. You have your answer then. Did you look up the Q factor for a series RLC? I'll add some more thoughts to my answer... I read some about series RLC from a file i found online. I eagerly await your updated answered :) I've amended my answer to state the series tuned circuit Q and how it applies to a parallel tuned circuit with coil resistance. 10mH chokes in the lab will probably be low Q<<100 due to high DCR >>10 Ohms In order to make a Q = 100 filter you need an inductor with a higher Q than the filter. I could be wrong but I believe the series Rloss transforms into a Norton equiv circuit Rshunt loss according to the Q? Try a smaller choke with high Q and bigger cap. Q=100 is difficult to obtain in small ferrite chokes due to fine wire turns. Q factor can "only" be measured by a Q-meter. Basically, the schematic is the following. Board must be carefully assembled. The injection system for good behavior (and measuring real parameters) is the very important specification. One must be also aware that the Q (and L) parameter vary with frequency ! So, one must measure Q at the frequency where inductance is used. All parasitics (capacitors ... unless perhaps missing some ...) are presented and can be evaluated for starting to 1pF. They all influence (sometimes greatly) the Q measured. Pads are used for adding externally components. Here also a example of results (mix stepping Caux -10pF to 10pF and series resistor inductor 1,3,5 Ohm = R4 ) Vertical axis is in dB.
STACK_EXCHANGE
Feature Request: Clean-Embeds: Hide embedded chapter's name Feature Requested Hello, I use the clean-embeds-css quite a lot. Basically everywhere. It is amazing, and something I am baffled hasn't made it into base obsidian yet. However, it has its flaws. I often have rather large documents when it comes to more complex topics, or when I track my own thoughts on a subject over the course of several months. In my workflow that results in a lot of specific chapters; and sometimes I want to embed some specific chapters elsewhere instead of copy-pasting it again. After all, that's the point of embeds. My issue is that embedded chapters don't get shown in the outline of a note - which I often require. The only solution I have found is to replicate the chapter name immediately before the embed. That however, results in the rather annoying visual seen in image 1. Currently, I don't really have a workaround to hotfix this issue. If possible, I would suggest either of the following two solutions, although there might be better ones: set up an additional css-class which will trim the first header encountered in an embed. This would make cssclass: clean-embeds work as it does currently, and the additional class could be leveraged if you wanted to show the chapter's title within the outline. set up a globally active style setting manipulating the behaviour of cssclass; clean-embeds to do the same. While probably easier you would be restricted to either one of both options - and while I personally wouldn't mind I would assume there are users who would prefer the current behaviour over what I am requesting. After all, each method has its advantages and disadvantages. The (at face value) best solution would be if obsidian would natively insert the chapter you embed (and its subchapters) into the note outline automatically, but even just thinking about it for a few seconds makes it complicated. Would header levels be adjusted to be children of the current level, or would they disrupt the chapters's flow and remain static? As I believe it is unlikely to be implemented at all, or soon, I am instead turning here to see if a workaround via a css-class is possible. I also don't believe css-snippets to be the solution if you want granularity. To my knowledge, css-snippets are active globally, which means you'd have to manually de-/activate the snippet every time you want a specific behaviour; which is rather cumbersome compared to f.e. simply changing the name of an embedded cssclass. Thank you. Sincerely, ~Gw Relevant Screenshot Image 1 Relevance [X] The feature would be useful to more users than just me. Style Settings [X] I have checked that the feature is not already available via the Style Settings plugin. Hi! As you may have heard, the upcoming Obsidian update 0.16 has a massive amount of changes for theme development, necessitating a complete rewrite of Shimmering Focus. This means: A lot of bugs and feature requests will be obsolete. with Obsidian 0.16 and the rewritten Shimmering Focus. As the theme has over 15,000 lines of CSS, the rewriting will take a considerable amount of time. Recreating the core features alone will likely take me a while, so I simply will not have much time to implement many feature requests, since the recreation of the past core features has priority. Basically, this is the reason why I am – for now – closing this issue. If your bug still exists / your feature request is still relevant in the upcoming version 3.0 of Shimmering Focus, please comment here and I will re-open it. 🙂 set up a globally active style setting manipulating the behaviour of cssclass; clean-embeds to do the same. Has been added to the rewritten Shimmering Focus, and therefore will be available when the rewrite is done
GITHUB_ARCHIVE
1. Don’t organize while capturing, and don’t organize for the sake of organizing. 2. PARA is a flexible system for organizing notes and files and taking action on them. It stands for Projects, Areas, Resources, Archives, going from the most actionable (Projects) to the least (Archives). 3. The simplest way to organize notes in many apps is using [[wiki links]]. When I started building my second brain, I made the mistake of organizing for the sake of organizing. Maybe it was because I built my first system with Notion, but I’d spent countless hours shaping databases and tables — only to never look at them again. But with the power of the CODE framework and Logseq, I’ve been able to consume more information and do something useful with it. Here’s how I stopped hoarding notes and started taking action: Use a lightweight organizational system that helps you act It’s impossible to only use one tool as knowledge workers. From cloud drives to electronic devices, our notes and files are scattered over different digital and physical places. But, it is possible to have a single system that works across all platforms. One such system is Tiago Forte’s PARA, which perfectly fits in the organize step of the CODE framework. PARA stands for Projects, Areas, Resources, Archives. Basically, it’s a way to categorize information by actionability, going from the most actionable (Projects) to the least (Archives): If you have time, I recommend you read the article in the “Further learning” section. It’s about how the PARA method can help you become more organized without wasting time. But the system is easy. The only question you have to ask when organizing a note is: where do I want to see it next? Let’s see how I do that: How I organize in Logseq Yesterday, I showed how I quickly capture ideas on Logseq’s journals page and review them during my weekly review. But what do I do during that review? I scroll back in my timeline to see what I captured. I then make so-called [[wiki links]] out of topics I’m interested in and want to dive deeper into. Many apps use this format for linking to other notes. Here’s an annotated example from my Logseq graph: I planned to read, saved highlights directly underneath the task, and moved them over to the book’s page during my weekly review (see 1, this is where I’ve referenced this part of my notes on two separate journals pages). What’s important to note here is that capturing and organizing are two separate steps; I never organize while capturing ideas. On the book page, I have some metadata (2) that I can use to create table views. As you can see, I treat books as Projects when I want to wrangle all ideas out. But this metadata isn’t crucial. Because I’ve added these wiki links (3), I can easily find my way back to these notes from many places in my notes collection (see 4, graph view). This means it doesn’t matter where I store my notes; thanks to these “bi-directional” links, I can easily dip back into my notes. It also eliminates the need for a proper organizational system (well, mostly). Tomorrow, I’ll get deeper into how I distill my notes to see what they’re about at a glance. That way, (re)using ideas becomes a breeze. Your turn: When do you organize? Most of us have no problem capturing ideas. But it’s the organization step we often get stuck. I got out of it by finding a process that works for me. That’s planning what I read, saving stuff by simply copy-pasting, and then organizing quickly by wrapping some [[brackets]] around topics I’m interested in. But that’s me. So, let’s learn from each other by answering this question: When do you organize your notes? Please take a few minutes to think about your current process for organizing notes and the pros and cons of doing it this way. And if you never organize, reflect for a moment why that might be. I hope to read from you! The PARA Method: The Simple System for Organizing Your Digital Life in Seconds (8 minutes reading time) If your digital storage systems and notes apps are chaos (like mine were for many years), I recommend you look at the PARA method. It’s as easy as creating four folders in each app you use regularly and asking yourself one question when organizing information. But if you need more context, this article by Tiago Forte is a great introduction.
OPCFW_CODE
At current, Android dominates the worldwide smartphone operating system market. I love to take challenges and contributing my skills expertise to a company to help it prosper and on the identical time, develop as a professional iOS Developer. There was the creator who began a undertaking on account of a close to-demise experience. Throughout development, the game designer implements and modifies the game design to replicate the present imaginative and prescient of the sport. Have the opportunity to alter the games industry by making games for brand spanking new markets and never simply replicating what’s already on the market. Getting your self used to object-oriented programming would easily get you began. IGDA Chicago enriches their sport improvement neighborhood via civic engagement and occasions that facilitate networking and skilled development. You will be discovering the design of aggressive video games and simulations through the use of gaming design elements along with educational plan, and studying theory. Programming skills will not be required but helpful. They often finance the event, generally by paying a online game developer (the writer calls this external development) and typically by paying an inner staff of developers called a studio. With 4+ years expertise in recreation improvement and 10+ experience in programming, I made many cellular games up to now and considering for more. Video Game Development is likely one of the most evolving and ever-rising trade. As a result of the publisher usually finances growth, it usually tries to handle development danger with a staff of producers or mission managers to observe the progress of the developer, critique ongoing growth, and assist as needed. Mobile And HTML5 Game Development? We Acquired Answers! Turning yourself into an expert Unity-licensed recreation developer is the dream of every particular person with curiosity in recreation development. Presumably you might be enthusiastic about starting your individual gaming enterprise, an Impartial shop, a budding recreation development ace set to rake in reputation and riches. Once you enroll for a Certificates, you’ll have entry to all videos, quizzes, and programming assignments (if applicable). Discovering good artwork for a recreation was an expensive and time-consuming task. At Academy of Art College, you’ll obtain a effectively-rounded education within the arts and sciences, with courses that cover methods in recreation design, sport programming, idea art, 3D modeling, and animation, amongst others.anonymous,uncategorized,misc,general,other Making Your First Recreation 102 103 There are usually one to a number of lead programmers , 104 who implement the game’s starting codebase and overview future development and programmer allocation on individual modules. game development life cycle wiki, download drama korea game development girl, game development process outline, download drakor game development girl sub indo, game development girl download Sport Developer Programs And Online Training Cellular sport improvement is at the moment experiencing an outstanding rise, especially with the rising recognition of smartphones and tablets. There’s an infinite quantity of video game-related content material out there, and there are one million legitimate ways to debate and luxuriate in video video games. You may also be taught to program collision detection and different physics necessary to create sensible weapon features. When this does happen, most builders and publishers quickly launch patches that fix the bugs and make the game absolutely playable once more. Engine Development – This course emphasizes debugging, improvement of problem fixing skills, studying and understanding pre-written code, necessities evaluation, and working towards being a greater software program engineer upon commencement. Google Buys Sport Developer Typhoon Studios TechCrunch Arising as a prominent branch of sport improvement within the Nineteen Seventies after the massive success of arcade video video games, game designers as we know them immediately had been tasked with designing the bulk of content for the game, including the principles, storyline, characters and overall attraction. The requirement for video and pc games in the market has grown, and so the demand for expert recreation designers is prone to increase. Use Upwork to chat or video name, share information, and track mission milestones from your desktop or cellular. We’re a worldwide network of collaborative communities and people from all fields of game development, including programmers and producers, designers and artists, writers, businesspeople, QA team members, localization specialists, and everyone else who participates in the recreation development process. game development processes, game development life cycle methodology, game development life cycle guidelines How about these games which train English for you? Plenty of sport developers are heavy on the programming facet of things. Not like different widespread examples, results of the Unity Licensed Developer Examination, after its completion, are out and are displayed in accordance with the topic space. Our amenities, college, and palms-on learning method are designed to give you the instruments it is advisable succeed in this thrilling business.
OPCFW_CODE
This topic is designed to help you plan to protect your Forefront TMG network against common attacks and Domain Name System (DNS) attacks. It describes: Detection of common attacks Common attacks include the following: - Windows out-of-band (WinNuke) attack—An attacker launches an out-of-band denial-of-service (DoS) attack against a host protected by Forefront TMG. If the attack is successful, it causes the computer to fail or a loss of network connectivity on vulnerable computers. - Land attack—An attacker sends a TCP SYN packet with a spoofed source IP address that matches the IP address of the targeted computer, and with a port number that is allowed by the Forefront TMG policy rules, so that the targeted computer tries to establish a TCP session with itself. If the attack is successful, some TCP implementations could go into a loop causing the computer to fail. - Ping of death—An attacker attaches a large amount of information, that exceeds the maximum IP packet size, to an Internet Control Message Protocol (ICMP) echo (ping) request. If the attack is successful, a kernel buffer overflows, causing the computer to fail. - IP half scan—An attacker repeatedly attempts to connect to a targeted computer, but does not send ACK packets in response to SYN/ACK packets. During a normal TCP connection, the source initiates the connection by sending a SYN packet to a port on the destination system. If a service is listening on that port, the service responds with a SYN/ACK packet. The client that initiates the connection then responds with an ACK packet, and the connection is established. If the destination host is not waiting for a connection on the specified port, it responds with an RST packet. Most system logs do not log completed connections until the final ACK packet is received from the source. Sending other types of packets that do not follow this sequence can elicit useful responses from the target host, without causing a connection to be - UDP bomb—An attacker attempts to send a User Datagram Protocol (UDP) datagram, with illegal values in certain fields, which could cause some older operating systems to fail when the datagram is received. By default, no alert is configured for this type of attack. - Port scan—An attacker attempts to count the services that are running on a computer by probing each port for a response. You can specify the number of ports that can be scanned before an event is generated. When Forefront TMG intrusion detection is enabled and offending packets are detected, they are dropped, and an event that triggers an Intrusion Detected alert is generated. By default, the Intrusion Detected alert is reset automatically after one minute, during which time Forefront TMG continues to block offending packets but without issuing an alert. You can configure this alert to send you an e-mail notification when it is triggered. You can also enable logging of the dropped packets. The name of each type of detected attack corresponds to an additional condition in the definition of the Intrusion Detected event. For each additional condition (type of attack), you can define and enable an alert which specifies the actions to be taken in response to the event, and is issued by the Microsoft Firewall service, when all the conditions specified in the alert are met. The actions that can be triggered by an alert include: sending an e-mail message, invoking a command, writing to a log, and starting or stopping Forefront TMG services. Detection of DNS attacks The Forefront TMG Domain Name System (DNS) filter intercepts and analyzes all inbound DNS traffic that is destined for the internal network, and other protected networks. If DNS attack detection is enabled, you can specify that the DNS filter checks for the following types of suspicious activity: - DNS host name overflow—When a DNS response for a host name exceeds 255 bytes, applications that do not check host name length may overflow internal buffers when copying this host name, allowing a remote attacker to execute arbitrary commands on a targeted computer. - DNS length overflow—When a DNS response for an IP address exceeds 4 bytes, some applications executing DNS lookups will overflow internal buffers, allowing a remote attacker to execute arbitrary commands on a targeted computer. Forefront TMG also checks that the value of RDLength does not exceed the size of the rest of the DNS response. - DNS zone transfer—A client system uses a DNS client application to transfer zones from an internal DNS When offending packets are detected, they are dropped, and an event that triggers a DNS Intrusion alert is generated. You can configure the alerts to notify you that an attack was detected. When the DNS Intrusion event is generated five times during one second for DNS zone transfer, a DNS Zone Transfer Intrusion alert is triggered. By default, after the applicable predefined alerts are triggered, they are not triggered again until they are reset manually.
OPCFW_CODE
// Black box testing, that's why we have mbpqs_test as package instead of mbpqs. // This way, we know that accessibility of the api functions alone provides enough to fully function. package mbpqs_test import ( "fmt" "math/rand" "testing" "github.com/Breus/mbpqs" ) // This test adds and verifies multiple channels, // consequently signs and verifies multiple messages in each channel, // grows the channel and signs/verifies new messages. func TestMultiChannels(t *testing.T) { // Generate parameterized keypair. var rootH uint32 = 2 var chanH uint32 = 10 var c uint16 = 0 var w uint16 = 4 var n uint32 = 32 sk, pk, err := mbpqs.GenKeyPair(n, rootH, chanH, c, w) if err != nil { t.Fatalf("KeyGen failed: %s\n", err) } // Add 2^rootH channels for testing. for i := 0; i < (1 << rootH); i++ { chIdx, rtSig, err := sk.AddChannel() fmt.Printf("Added channel with ID: %d\n", chIdx) if err != nil { t.Fatalf("Adding %d-th channel failed with error %s\n", chIdx, err) } fmt.Printf("Created channel %d\n", chIdx) acceptChannel, err := pk.VerifyChannel(rtSig) if err != nil { t.Fatalf("Channel verification failed: %s\n", err) } if !acceptChannel { t.Fatal("Channel verification not accepted") } // Set the authnode to the root of the first blocks tree. authNode := rtSig.GetSignedRoot() // Now, we sign 2^chanH times, and verify the signatures in each channel. for j := 0; j < int(chanH)-1; j++ { msg := []byte("Message" + string(j)) sig, err := sk.SignChannelMsg(chIdx, msg) if err != nil { t.Fatalf("Message signing in channel %d failed with error %s\n", chIdx, err) } fmt.Printf("Signed message %d in channel %d\n", j, chIdx) acceptSig, err := pk.VerifyMsg(sig, msg, authNode) if err != nil { t.Fatalf("Verification message %d in channel %d failed with error %s\n", j, i, err) } if !acceptSig { t.Fatalf("Verification of correct message/sig not accepted for message %d in channel %d\n", j, i) } else { fmt.Printf("Correctly verified message %d in channel %d\n", j, chIdx) } authNode = sig.NextAuthNode() } // Let's grow the channels! gs, err := sk.GrowChannel(chIdx) if err != nil { t.Fatalf("Growing channel %d failed with error %s\n", chIdx, err) } // Let's verifiy the growth signature. acceptGrowth, err := pk.VerifyGrow(gs, authNode) if err != nil { t.Fatalf("Verification of growth channel %d failed with error: %s\n", chIdx, err) } if !acceptGrowth { t.Fatalf("Correct growth of channel %d not accepted", chIdx) } authNode = gs.NextAuthNode() // We have new keys to sign, lets use them! for h := 0; h < int(chanH-1); h++ { msg := []byte("Message after growth" + string(h)) sig, err := sk.SignChannelMsg(chIdx, msg) if err != nil { t.Fatalf("Message signing in channel %d failed with error %s\n", chIdx, err) } fmt.Printf("Signed message %d in channel %d\n", h, chIdx) acceptSig, err := pk.VerifyMsg(sig, msg, authNode) if err != nil { t.Fatalf("Verification message %d in channel %d failed with error %s\n", h, i, err) } if !acceptSig { t.Fatalf("Verification of correct message/sig not accepted for message %d in channel %d\n", h, i) } else { fmt.Printf("Correctly verified message %d in channel %d\n", h, chIdx) } authNode = sig.NextAuthNode() } } } // Multichain mimick for testing purposes. type Multichain struct { channels []Blockchain } // Blockchain mimick for testing purposes. type Blockchain struct { blocks []mbpqs.Signature } // TestSignStoreVerify signs multiple messages in multiple channels. // Subsequently, the signatures are stored on the 'blockchain'. // Then, we test if a verifier can indeed verify the signatures in the // channel it has access to. func TestSignStoreVerify(t *testing.T) { var nrChains int = 1 // Make a multichain with 'nrChains' blockchains. mc := Multichain{ channels: make([]Blockchain, nrChains), } // Generate parameterized keypair. var rootH uint32 = 2 var chanH uint32 = 5 var c uint16 = 1 var w uint16 = 4 var n uint32 = 32 var gf uint32 = 0 //sk, pk, err := mbpqs.GenKeyPair(n, rootH, chanH, c, w) sk, pk, err := mbpqs.GenerateKeyPair(mbpqs.InitParam(n, rootH, chanH, gf, c, w), 1) if err != nil { t.Fatalf("KeyGen failed: %s\n", err) } // SIGN + STORE ON "BLOCKCHAIN" // Add to each channel a keychannel. for i := 0; i < nrChains; i++ { chIdx, rtSig, err := sk.AddChannel() if err != nil { t.Fatalf("Addition of channel %d failed with error %s\n", chIdx, err) } // Add the rootSig to the blocks. mc.channels[i].blocks = append(mc.channels[i].blocks, rtSig) // Lets sign chanH-1 messages in each channel and add it to its respective blocks. for j := 0; j < int(chanH-1); j++ { msg := []byte("Message in channel" + string(chIdx)) msgSig, err := sk.SignMsg(chIdx, msg) if err != nil { t.Fatalf("Signing message %d in channel %d failed with error %s\n", j, chIdx, err) } mc.channels[i].blocks = append(mc.channels[i].blocks, msgSig) } // Lets also test a growsignature. growSig, err := sk.GrowChannel(chIdx) if err != nil { t.Fatalf("Growing channel %d failed with error %s\n", chIdx, err) } mc.channels[i].blocks = append(mc.channels[i].blocks, growSig) // Lets add a few more message siganture to test. for k := 0; k < int(chanH-1); k++ { msg := []byte("Message in channel" + string(chIdx)) msgSig, err := sk.SignMsg(chIdx, msg) if err != nil { t.Fatalf("Signing message %d in channel %d failed with error %s\n", k, chIdx, err) } mc.channels[i].blocks = append(mc.channels[i].blocks, msgSig) } } // VERIFY FROM "BLOCKCHAIN" // Verify the rootSignature for each channel. for i := 0; i < nrChains; i++ { // Counter to count correct signature verifications for this channel. var counter int // Retrieve the current channel in the multichain curChan := mc.channels[i] var nextAuthNode []byte // Lets verify the signatures in the channel. for j := 0; j < int(len(curChan.blocks)); j++ { // Current Signature block curSig := curChan.blocks[j] curMsg := []byte("Message in channel" + string(i)) acceptMsg, err := pk.Verify(curSig, curMsg, nextAuthNode) if err != nil { t.Fatalf("Message verification in channel %d failed with error %s", i+1, err) } if !acceptMsg { t.Fatalf("Verification of correct message %d on chain %d not accepted", j, i) } else { counter++ } nextAuthNode = curSig.NextAuthNode(nextAuthNode) } if counter != len(curChan.blocks) { t.Fatal("Not enough signatures are correctly verified") } if counter != int(2*(chanH-1)+2) { t.Fatal("Not enough signatures verified correctly") } } } func TestVerifyMsg(t *testing.T) { sigs := 1000 for H := 1; H < 2; H++ { p := mbpqs.InitParam(32, uint32(H), 1001, 0, 1, 4) sk, pk, err := mbpqs.GenerateKeyPair(p, 1) if err != nil { t.Fatal("Generating key pair failed with error: ", err) } chIdx, RtSig, err := sk.AddChannel() if err != nil { t.Fatal("Adding channel failed with error: ", err) } msg := make([]byte, 512000) var sigChain []mbpqs.Signature authNode := RtSig.NextAuthNode() for j := 0; j < sigs; j++ { rand.Seed(int64(j)) rand.Read(msg) sig, err := sk.SignMsg(chIdx, msg) if err != nil { t.Fatal("message signing failed with error:", err) } sigChain = append(sigChain, sig) accept, err := pk.VerifyMsg(sigChain[j].(*mbpqs.MsgSignature), msg, authNode) if err != nil { t.Fatal("Message verification failed with error:", err) } if !accept { t.Fatal("Correct signature not verified") } authNode = sigChain[j].NextAuthNode(authNode) } } }
STACK_EDU
Presence of FileField breaks FunctionalTest::submitForm Description Affects at least framework 4.5.3 & cms 4.5.1 - but I imagine the issue affects every version. Tested with CWP release 2.5.2 that uses core release 4.5.2. Have form with dropdown fields End to end test form for business rules around submission data using FunctionalTest::submitForm add new file field from silverstripe/assets to accomodate new requirement every test now fails Steps to Reproduce Tested via composer create-project silverstripe/recipe-cms formtesttest 4.5.2 The form must have both DropdownField and FileField present. Other fields appear to be irrelevant. Each field separately sees the tests pass. Note no field has any custom validation, nor does the form. Clarification: the issue exists only in automated testing envrionment (Siverstripe's phpunit test harness). Manual reproduction attempts (font-end user testing) see everything work as expected. <?php namespace { use SilverStripe\Forms\DropdownField; use SilverStripe\Forms\FieldList; use SilverStripe\Forms\FileField; use SilverStripe\Forms\Form; use SilverStripe\Forms\FormAction; use SilverStripe\View\SSViewer; use SilverStripe\CMS\Controllers\ContentController; class PageController extends ContentController { private static $allowed_actions = ['Form', 'FormPass']; public function Form() { return Form::create( $this, __FUNCTION__, FieldList::create( DropdownField::create('Breaks', null, [ 'one' => 'First option', 'two' => 'Second option', ]), FileField::create('uploaded') ), FieldList::create( FormAction::create('formSubmission', 'Submit') ) ); } public function FormPass() { $form = $this->Form()->setName(__FUNCTION__); $form->Fields()->removeByName('uploaded'); return $form; } public function formSubmission() { return "Neat."; } public function getViewer($action) { return SSViewer::fromString( '<!DOCTYPE html>' . PHP_EOL . '<html><head><title>Form upload test</title></head><body>$Form $FormPass</body></html>' ); } } } <?php use SilverStripe\Control\HTTPRequest; use SilverStripe\Dev\FunctionalTest; use SilverStripe\Dev\SapphireTest; class PageControllerTest extends FunctionalTest { protected $usesDatabase = true; protected function setUp() { parent::setUp(); $page = Page::create(); $page->update(['URLSegment' => 'home'])->write(); $page->doPublish(); } /** * @dataProvider getPassFail */ public function testForm($formName) { $this->get('/'); $response = $this->submitForm("Form_$formName", 'action_formSubmission', ['Breaks' => 'two']); $this->assertEquals('Neat.', $response->getBody()); } public function getPassFail() { return [ ['FormPass'], ['Form'], ]; } } output PHPUnit 5.7.27 by Sebastian Bergmann and contributors. .F 2 / 2 (100%) Time: 3.25 seconds, Memory: 44.50MB There was 1 failure: 1) PageControllerTest::testForm with data set #1 ('Form') Failed asserting that two strings are equal. --- Expected +++ Actual @@ @@ -'Neat.' +'<!DOCTYPE html> +<html><head><title>Form upload test</title></head><body> [...] FAILURES! Tests: 2, Assertions: 2, Failures: 1. Presence of an input type=file causes SimpleTest to (correctly) encode a submission as multi part form data, as opposed to a simple GET style URL Query string. TestSession though is set up to then run this through parse_str which is designed for query string parsing only. https://github.com/silverstripe/silverstripe-framework/blob/d408a4e714d4953df9f8552f2f3536223b799b81/src/Dev/TestSession.php#L238 I.e. this part of testing designed specifically for POST vars can only handle GET vars. Without a file field the submission data becomes Breaks=two&action_formSubmission=Submit With a file field the submission data becomes: --st5f0ba947b4fc5 Content-Disposition: form-data; name="Breaks" two --st5f0ba947b4fc5 Content-Disposition: form-data; name="MAX_FILE_SIZE" 2097152 --st5f0ba947b4fc5 Content-Disposition: form-data; name="action_formSubmission" Submit --st5f0ba947b4fc5-- parse_str decodes this into: array(1) { '--st5f0ba947b4fc5 Content-Disposition:_form-data;_name' => string(213) "[... rest of the POST body] Note: array size 1 spaces turned to underscores in array key the value of the only key is the entire remainder of the submission body https://www.php.net/manual/en/function.parse-str.php#refsect1-function.parse-str-examples Because variables in PHP can't have dots and spaces in their names, those are converted to underscores. Same applies to naming of respective key names in case of using this function with result parameter. This effectively wipes all submitted data clean, thus causing a dropdown to have an empty submission (which is invalid). This would in theory (I've not tested this) also mean any required field would also fail if a validator was applied - or in the very least would cause an assertion that something happens with the submitted data to fail, in that there is no submitted data. There does not appear to be an easy way out of this. However, this third party library designed for use with PHP4 that has been folded into framework due to the lack of composer at the time (when Silverstripe shifted from SVN to git) is actually actively maintained and is compatible with PHP7 - and should probably be dropped from framework. This may solve the issue. It may not. This is more work than I can spend investigating now. https://github.com/simpletest/simpletest/ After a bit of reading, the best I can find to deal with this is this gist from the comments of this StackOverflow post. Then we would just need a way to work out whether to call parse_str or parse_raw_http_request
GITHUB_ARCHIVE
Connect API to an organization from organization profile An organization manager needs to be able to connect APIs to the manager's organization. The manager can only do this if he/she also an API manager. The organization profile needs to contain an option to Connect APIs to an organization and that option is only visible if the user is both manager of current organization and an API owner. Definition of done [ ] An organization manager can see a Connect APIs to an organization option on the organization profile, if the current user also has the role of API owner [ ] The available choices for APIs to connect to are API(s) the current user manages [ ] If the user selects an API and saves the changes, a connection between API and organization is made Wireframes from Settings tab, organization manager can select from a dropdown APIs that he manage to connect to his organization ![altorganizationprofilesettings] (alt. design suggestion: a small button in the API view in Organization profile to connect APIs to the organization) On selecting and saving an API, an S alert appears confirming the Organization Manager that his/her API is now connected to the organization and can be viewed from the Organization profile @marla-singer Could you take this task? It is related to #158 and should largely share template and code with the other task @bajiat yes. Estimeted it in waffle @marla-singer @bajiat let me know your feedback about the wireframes @Nazarah Why do we have text "Suggest an API" but no "Connect API" or "Add API"? @Nazarah I like the alt. design suggestion with a small button in the API view. What do you think @bajiat ? @marla-singer @nazarah I also like the alt design, feels like its simple to add APIs to an organization. @bajiat My undesrtading is user can add API if user is manager of current organization, user is manager of selected API and API doesn't connected to another one organization. What about the third condition? @bajiat Also can administrator connect (suggest) APIs to organization? And which apis does admin connect to : any apis or just own apis? @marla-singer : sorry for the label mishap. It should be "Connect an API". If an user is an organization manager AND s/he has her own APIs, then s/he would be able to connect only those APIs to organization. So on clicking the dropmenu, only those APIs should appear to which s/he has managing rights. Let me know if I could explain this to you. I am ok with the Alternate suggestion as well. Just make sure you add the help text and the dropmenu with the logic described below in the Dom that appears on clicking the button. @marla-singer: Admins can do anything. No restrictions about managed organization or owned APIs. 2016-12-20 15:29 GMT+02:00 Nazia Hasan<EMAIL_ADDRESS> @marla-singer https://github.com/marla-singer : sorry for the label mishap. It should be "Connect an API". If an user is an organization manager AND s/he has her own APIs, then s/he would be able to connect only those APIs to organization. So on clicking the dropmenu, only those APIs should appear to which s/he has managing rights. Let me know if I could explain this to you. I am ok with the Alternate suggestion as well. Just make sure you add the help text and the dropmenu with the logic described below in the Dom that appears on clicking the button. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/Digipalvelutehdas/APIKA/issues/159#issuecomment-268243075, or mute the thread https://github.com/notifications/unsubscribe-auth/ALFnlL1Rl0r00dPo9-7FvfIa5sblYMhiks5rJ9hNgaJpZM4LPAWa . -- Taija Björklund p. 040 5177 969 <EMAIL_ADDRESS>www.linkedin.com/in/taijabjorklund @marla-singer "What about the third condition?" Can you clarify what was the question about? "What about the third condition?" Can you clarify what was the question about? @bajiat I meant we have three conditions for APIs that can be connected to organization by current user: User must be manager for this organization User can add APIs that were managed by him Usen can add APIs that don't have connection to other organization (I told about this condition) Usen can add APIs that don't have connection to other organization Should I have in mind this point or User can add managed APIs that both don't have connect to organization yet and have already connect to any organization? @Nazarah Now it looks like this Button on Organization profile After clicking these is shown form @marla-singer Thanks for the clarification. Each API should be connected to one organization only, so you're right about a third condition. The API can be connected only, if it is not connected to any organization yet. @marla-singer : great work Dasha. :) @marla-singer Would you be interested in this? The data model has changed and there is a reusable form. @brylie can explain current status. Please estimate the task, if you are interested. @bajiat yes, I'll take it. A rough estimate is 5 days Thanks @marla-singer !
GITHUB_ARCHIVE
The film itself is one of Pixar's best. It certainly shows a new found maturity to their filmmaking, bringing my wife and (presumably) a good proportion of the rest of the audience to tears. I, on the other hand, am a Northern English bloke hardened by a childhood working 28 hours a day down the mines, bricks for breakfast, living in a puddle, etc - therefore no film has ever had that effect on me (either that or I'm just emotionally stunted). The fact that a frankly surreal story idea works so well is testament to Pixar's excellent storytelling craft. As for the 3D effect - I'm not completely convinced it's anything other than a short lived gimmick. It certainly works - the form of objects is realised surprisingly well and the circular (as opposed to linear) polarisation seems to negate the headache I was fearing. However, I kept finding myself distracted by the 3D effect and wasn't able to completely absorb myself in the film. Perhaps this impression will fade if 3D cinema becomes more commonplace. Another problem which the film mostly avoided is a perceived lag that happens when there is a camera cut. Pixar seemed to be vary careful to keep the amount of parallax on the focus of the image roughly constant between cuts, but the editing on one of the trailers beforehand (some 3D CGI space thing) was jarring. Far too many fast cuts causing a noticeable delay whilst my eyes locked onto the new parallax. Maybe younger viewers are able to keep up better, but I'm a 30 year old boy - surely my eye muscles are still good! Reading this post back, I've noticed that I haven't (yet) mentioned the graphics in the film, despite being a graphics geek and indeed a graphics programmer. Suffice to say they're so good you barely notice them - the few times I did think about it I saw flawless lighting, shading, the works. Sometimes I envy film effects people in that they have a lot more processing time at hand as opposed to games aiming to have everything rendered within 16⅔ or 33⅓ milliseconds depending on whether we're aiming for 60 or 30 frames per second. Another technical oddity I noticed is that at the end of the credits (I hung around in case there was any extra bits at the end), there was a message saying that all final rendering had been done on Intel processors. I'm mildly surprised that Pixar aren't using any GPU technology such as NVIDIA's CUDA or OpenCL to accelerate things - perhaps because the cost and time required to port over their existing rendering software is prohibitive, despite the gains, so simply throwing more processors at the problem is a cheap way to improve rendering performance.
OPCFW_CODE
A. Thall. Fast Mersenne Prime Testing on the GPU. In GPGPU-4: Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units, March 5, 2011, Newport Beach, CA2011. This is a CUDA-based implementation of Crandall & Fagin's IBDWT method for fast multiplication modulo Mersenne numbers. M. Rummel, G. Kapfhammer, and A. Thall. Towards the prioritization of regression test suites with data-flow information. In SAC '05: Proceedings of the 2005 ACM Symposium on Applied Computing, pages 1499--1504, New York, NY, USA, 2005. ACM Press. S. Pizer, P. T. Fletcher, S. Joshi, A. G. Gash, J. Stough, A. Thall, G. Tracton, and E. Chaney. A method and software for segmentation of anatomic object ensembles by deformable m-reps. Medical Physics, 32(5):1335--1345, May 2005 A. Thall. Deformable Solid Modeling via Medial Sampling and Displacement Subdivision. Doctoral dissertation, March 2004. Q. Han, C. Lu, S. Liu, S. Pizer, S. Joshi, and A. Thall. Representing multi-figure anatomical objects. In IEEE International Symposium on Biomedical Imaging (ISBI), P. Yushkevich, P. T. Fletcher, S. Joshi, A. Thall, and S. Pizer. Continuous medial representations for geometric object modeling in 2D and 3D. Image Vision Comput., 21(1):17--27, January 2003. Special issue on Generative-Model-Based Vision (GMBV2002). S. Pizer, P. T. Fletcher, A. Thall, M. Styner, Guido Gerig, and S. Joshi. Object models in multiscale intrinsic coordinates via m-reps. Image Vision Comput., 21(1):5--15, January 2003. Special issue on Generative-Model-Based Vision (GMBV2002). S. Pizer, P. T. Fletcher, S. Joshi, A. Thall, J. Chen, Y. Fridman, D. Fritsch, A. G. Gash, J. Glotzer, M. Jiroutek, C. Lu, K. Muller, G. Tracton, P. Yushkevich, and E. Chaney. Deformable m-reps for 3D medical image segmentation. Int. J. Comput. Vision, 55(2-3):85--106, 2003. S. Joshi, S. Pizer, P. T. Fletcher, P. Yushkevich, A. Thall, and J. S. Marron. Multi-scale deformable model segmentation and statistical shape analysis using medial descriptions. IEEE Transactions on Medical Imaging (TMI), 21(5):538--550, May 2002. S. Joshi, S. Pizer, P. T. Fletcher, A. Thall, and G. Tracton. Multi-scale 3-D deformable model segmentation based on medial description. In IPMI '01: Proceedings of the 17th International Conference on Information Processing in Medical Imaging, pages 64--77, London, UK, 2001. Springer-Verlag. Technical Reports and unpublished work A. Thall. Implementing a Fast Lucas-Lehmer Test on Programmable Graphics Hardware. Testing for Mersenne primes using Cg-shaders on NVidia GPU hardware (August 2007; addendum June 2009) A. Thall. Extended-Precision Floating-Point Numbers for GPU Computation. Implementing double-float and quad-float numbers using Cg. (March 2007; addendum July 2009) A. Thall. Fast C^2 interpolating subdivision surfaces using iterative inversion of stationary subdivision rules. UNC Chapel Hill Technical Report TR02-001. P. T. Fletcher, Y. Fridman, A. Thall, and D. Fritsch. SCAMP: A solid-modeling program using slice-constrained medial primitives for modeling 3D anatomical objects. UNC Chapel Hill Technical Report TR99-035.
OPCFW_CODE
Culminating Project Title Date of Award Culminating Project Type Computer Science: M.S. Computer Science and Information Technology School of Science and Engineering Dr. Maninder Singh Dr. Andrew Anda Dr. Aleksandar Tomovic Creative Commons License This work is licensed under a Creative Commons Attribution 4.0 License. Keywords and Subject Headings Extractive Text Summarization, Text Summarization, Natural Language Processing, Machine Learning, Unsupervised Machine Learning We routinely encounter too much information in the form of social media posts, blogs, news articles, research papers, and other formats. This represents an infeasible quantity of information to process, even for selecting a more manageable subset. The process of condensing a large amount of text data into a shorter form that still conveys the important ideas of the original document is text summarization. Text summarization is an active subfield of natural language processing. Extractive text summarization identifies and concatenates important sections of a document sections to form a shorter document that summarizes the contents of the original document. We discuss, implement, and compare several unsupervised machine learning algorithms including latent semantic analysis, latent dirichlet allocation, and k-means clustering. ROUGE-N metric was used to evaluate summaries generated by these machine learning algorithms. Summaries generated by using tf-idf as a feature extraction scheme and latent semantic analysis had the highest ROUGE-N scores. This computer-level assessment was validated using an empirical analysis survey. Acharya, Swapnil, "Extractive Text Summarization Using Machine Learning" (2022). Culminating Projects in Computer Science and Information Technology. 39. I would like to sincerely thank my advisor Dr. Maninder Singh, Department of Computer Science, and Information Technology (CSIT) for his suggestions and continuous mentorship throughout the research and execution of its implementation. I would also like to express my gratitude to my committee member, Dr. Andrew Anda, Department of CSIT for his dedicated help in elevating several aspects of documentation of this research. I would also like to thank my committee member Dr. Aleksandar Tomovic, Department of CSIT for his constructive feedback and support throughout the research. I would also like to thank Mr. Clifford Moran, Department of CSIT for his continuous and timely help in setting up classes, registration, and scheduling research presentations. I would also like to thank all the members of the CSIT faculty at St. Cloud State University whose knowledge and expertise helped me to shape my academic career. I am very grateful for all my friends who helped me during the empirical analysis stage of this research. I would like to thank my family for their eternal support.
OPCFW_CODE
Then you should at most see an entry in the taskbar -- most of the window should remain hidden. For more details, check the list of supported toolboxes and ineligible programs. I really appreciate your help since my application was ok with 64 bits. I tried it with Matlab runtime component. navigate here These questions are too complex to fully answer in a comment thread like this one. It merely calls the script test (which is in the file test.m), allowing output to be displayed. Job outputs The hello_world.m script sends the output to standard output. by just copying directories for development machine to customer machine and letting the customer start the application with a batch file that sets paths to the copied directories? navigate here If you're not sure what login type to use, look here for how you login with HarvardKey. The standalone built with these runtime switches will always run with single thread and will not display graphics — in batch or interactive. Review the MATLAB Compiler Support page to be sure. Use Phong shading % and texture mapping to wrap the topo map data around the sphere. Join the conversation Toggle Main Navigation Products Solutions Academia Support Community Events Contact Us How To Buy Contact Us How To Buy Products Solutions Academia Support Community Events It should include things like the name of the classes on both ends, what java field name maps to what MATLAB field name. Peter Webb replied on November 23rd, 2010 8:50 pm UTC : 21 of 45 OysterEngineer, Currently the best way to deploy a single application to multiple platforms is to use Builder How To Compile Matlab Code If a wrapper is not used, myApp.m must be a command function. myStandalone.m is an optional application-dependent wrapper and is a command function. Hide source code for intellectual property protection To be compatible with the MATLAB mcc compiler, your program must meet a few requirements. Matlab Deploytool She writes here about MATLAB programming and related topics. This can be avoided if you compile your MATLAB application into an executable with the MATLAB mcc compiler. Apps MATLAB Compiler apps enable you to quickly access common tasks through an interactive interface. Generated Tue, 20 Dec 2016 13:32:27 GMT by s_hp84 (squid/3.5.20) Matlab Application Compiler Any idea? Matt W Royi replied on November 18th, 2010 6:04 pm UTC : 2 of 45 Thanks, exactly what I needed. The resulting stand-alone application is then executed using the special SSCC command named run_mcc. Han Oostdijk replied on November 26th, 2010 11:34 am UTC : 22 of 45 For completeness of the ‘dossier' I include the answer to my question #19 that I received from For example, dumping data to the base workspace or displaying text in the non-existent command window. Matlab Compiler Download The generated executable takes its name from the name of the first MATLAB function specified on the command line. -a topo.mat: -a stands for ''additional file''; it is typically used to Matlab Compiler Price Thanks and best regards Kyaw Mark replied on January 5th, 2012 3:58 pm UTC : 45 of 45 This is my situation: - I have created a Matlab standalone application exe Sree replied on January 25th, 2011 12:16 pm UTC : 29 of 45 Thank you Scott ! http://jdvcafe.com/matlab-compiler/matlab-compiler-runtime-713.html On Unix, you add it to LD_LIBRARY_PATH (except on the MacIntosh, which uses DYLD_LIBRARY_PATH). E.g. Before we submit the job, make sure that the directory Log exists on the current working directory. Matlab Compiler Example s=surface(x,y,z,props); set(gcf, 'Color', 'white'); % White background axis square axis off axis equal title('Hello, World.', 'FontSize', 14, 'FontWeight', 'Bold'); end Build this function into a standalone application with the MATLAB Compiler: Standalone executables are a convenient way to provide turn-key MATLAB-based solutions to your colleagues or end users. We strongly recommend that you do not run any version of the MATLAB Runtime older than R2014a on macOS Sierra 10.12. http://jdvcafe.com/matlab-compiler/matlab-compiler-runtime-7-13.html As you very correctly note, the interaction between shared libraries and their host environments is full of interesting questions. :-) We definitely recognize the need to make it easier to marshal I'm not sure if the difference between operating systems is such that that is unavoidable, but it would be great to be able to make executables (and find corresponding MCR installers) Matlab Compiler Online Try or Buy There are many ways to start using MATLAB Compiler. This should keep it mostly invisible. Share with those who may not use MATLAB by creating standalone applications. You can read about code generation for control design applications in Seth Popinchalk's Seth on Simulink blog. I'll post articles that will make it easier, and perhaps even more fun, to use the MATLAB Compiler and the Builders. Matlab Compiler Sdk Joan You might want to take a look at the slides from MVC2010 on the subject: https://docs.google.com/present/edit?id=0AbvW_wUON0Y0ZGh6aGszaF82aGJ2czg2dzQ&hl=en Peter Webb replied on November 18th, 2010 7:27 pm UTC : 4 of 45 I am still in old-school mode since I use everyday mcc instead of deploytool. ******************************************** And the video tutorial (made by Adam Leon) shipped with ML Compiler is effective : in Do I need the C/C++ compiler installed? Tutorial files Let us say you have created the standalone binary hello_world. http://jdvcafe.com/matlab-compiler/matlab-compiler-runtime-8.html Large-scale deployment to enterprise systems is supported through MATLAB Production Server™. Redirect it to a file for batch jobs.
OPCFW_CODE
As users now expect applications to provide live updates, realtime chats, and dynamic content, modern web development requires the ability to deliver realtime experiences and communication between users and servers. WebSocket is a communication protocol that allows full-duplex two-way communication between a client and a server over a single TCP connection. The connection persists unless it is explicitly told to disconnect or in case of network or server failure. WebSockets allow developers to fulfil the need to integrate realtime features in their applications. With WebSockets, React applications allow users to receive updates and notifications without any noticeable delay. Components can easily subscribe to WebSocket events and update the UI in response to incoming data or events, such as new chat messages or live notifications while maintaining thousands of connections. Though WebSockets allow full-duplex communication with low latency, we have other alternatives available which can be used when a delay in communication is acceptable or bidirectional communication is not required. A few of these WebSocket alternatives include HTTP long polling, Server-sent events (SSE), and WebRTC. - HTTP long polling: The client sends an HTTP request to the server and the request stays persistent until the server has data to send to the client or a timeout occurs. The client processes the received data and sends another connection request. - Server-sent events (SSE): The client initiates a connection by sending an HTTP request to the server. The server keeps the request open and sends updates to the client whenever an event happens. - WebRTC (Web Real-Time Communication): It is an open source project and collection of technologies and protocols that allow realtime peer-to-peer communication. In this article we will explore the top WebSocket libraries that can be used in React applications and their pros, cons, and use cases. We'll also discuss the crucial factors to consider when choosing the best WebSocket library for your React project. Top four WebSocket libraries for React React useWebSocket is specifically designed for React applications which provide robust WebSocket integrations to React Components. - Idiomatic React integration: useWebSocket is specifically designed for React applications and provides idiomatic support to integrate WebSockets in React applications. - State management: It provides built-in state management for socket connections. - Socket.IO support: In addition to plain WebSocket, it also supports Socket.IO connections. - Limited to client side: React useWebSocket is a React library and requires Socket.IO or plain WebSocket implementation to add realtime functionality on the server side. Use cases: React useWebSocket can be used to add realtime features like communication, server and client updates, and dashboards. However, this is a new library and doesn’t have data on what companies are currently using it. - Automatic reconnection: Socket.IO periodically checks the connection between client and server and if the connection is interrupted, it automatically reconnects. - Old browser support: Socket.IO supports HTTP long-polling which ensures that Socket.IO applications will work even in old browsers that do not support WebSockets. - Server Compatibility: Socket.IO is designed to work with Node.js on the server side. This limits the choice of server technology if you are not using Node.js. - Not guaranteed exactly-once messaging: Socket.IO provides an at-most-once guarantee, meaning that a message may be delivered zero or one times. Use cases: Chat applications, realtime player interaction, realtime data dashboards are a few use cases of Socket.IO. Companies that use Socket.IO include: - Slack: Slack uses Socket.IO to enable realtime communication between users. - Twitter: Twitter uses Socket.IO to enable realtime updates to user feeds, notifications, and search results. - Google Docs: Google Docs uses Socket.IO to enable realtime collaboration between users on documents. - Uber: Uber uses Socket.IO to enable realtime tracking of rides and driver availability. - Fallback mechanism: SockJS provides a fallback mechanism that switches to other transport protocols if WebSockets is not supported in a browser. - Not compatible with plain WebSocket servers: SockJS client is an emulation of WebSockets and cannot connect to plain WebSocket servers. You need to implement SockJS at the backend to connect the client to the server. - Not designed specifically for React: SockJS is a browser WebSocket library which is not specially designed for React applications. This could result in a less seamless integration as compared to libraries specially designed for React. Use cases: SockJS can be used to develop realtime features in React applications including realtime communication, notifications, dashboards, live feeds etc. Companies that use SockJS include: - Pipedrive: Pipedrive uses SockJS to provide realtime sales pipeline and contact information to the salespeople. - IWB: IWB uses SockJS to enable realtime collaboration between its users. WS is a highly scalable, fast, and easy to use WebSocket library. It has a large active community on GitHub. can use WebSockets, HTTP long-polling, or WebTransport, depending on the browser and the environment. - Fast: WS is a fast WebSockets library. - Fallback mechanism: WS can use WebSockets, HTTP long-polling, and Web transport depending on the environment. - Large community: WS has a large community of users. - Difficult configuration: WS provides a lot of flexibility and it might be difficult to configure it when developing large applications. Use cases: Like other libraries, WS can also be used for realtime communication, multiplayer interaction, live dashboards etc. FreeCodeCamp uses WS to add realtime features such as instant feedback and progress tracking to its platform. Choosing the best WebSocket library in React Choosing the best WebSocket library depends on your needs. Here are a few key considerations that can help you to pick the best library for your project: - Project requirements: Every project has its own requirements. For example, you might be looking to prioritise performance, or limit your project's complexity. Additionally, certain libraries are more suited to certain use cases. For instance, Socket.IO is excellent for chat apps, while React useWebSocket simplifies realtime dashboards. - Library limitations: Each library comes with its limitations - as we have explored above such as the challenge of configuration in WS and the at-most-once message guarantee in Socket.IO. It is essential to not only consider these limitations, but also the trade-offs that you might have to make between things such as browser support, error handling, and ease of use. - React compatibility: Some developers prefer WebSocket libraries that are specifically designed for their tech stack to gain better control over its implementation. React useWebSocket is specifically designed for React, whereas libraries such as Socket.IO and WS aren’t. - Library community and maintenance: Active communities and regular updates indicate a maintained library. Consider the library's GitHub repository, support channels, and resources available online. These resources will help you debug your code if you get stuck. In the case of Socket.IO, SockJS, and WS, all three libraries have active communities on GitHub, each with a substantial number of stars: Socket.IO with 59k stars, SockJS with 8.3k stars, and WS with 20k stars. Active community reflects ongoing efforts to improve these libraries, ensuring an enhanced developer experience over time. Ably: An easier way to deliver realtime experiences with React Ably provides a serverless WebSockets solution for building realtime experiences. It is a highly scalable and reliable realtime infrastructure platform which offers easy to use client and server APIs that allow developers to develop applications that communicate in realtime. For React, Ably offers hooks to streamline the process of realtime communication in React applications without having to worry about infrastructure implementation. Using Ably for implementing realtime features is as simple as installing it, subscribing to a channel and publishing realtime messages. Try it for free today! WebSockets, with their ability to enable full-duplex, bi-directional communication have become a go-to solution for building realtime experiences. The libraries outlined allow the integration of WebSockets in React applications - but they each come with their advantages, and disadvantages. It is important to evaluate which is the right one - and if you should consider using an alternative approach, such as Ably.
OPCFW_CODE
import logging from mllaunchpad import ModelInterface, ModelMakerInterface import pandas as pd from sklearn import tree from sklearn.metrics import accuracy_score, confusion_matrix logger = logging.getLogger(__name__) # Train this example from the command line: # python -m mllaunchpad -c complex_cfg.yml train # # Start REST API: # python -m mllaunchpad -c complex_cfg.yml api # # Example API call: # http://127.0.0.1:5000/guessiris/v0/somethings?x=3&sepal.length=4.9&sepal.width=2.4&petal.length=3.3&petal.width=1 def data_prep(X): # prepping features, maybe parsing strange formats, imputing, cleaning text ... return X class MyModelMaker(ModelMakerInterface): """ """ def create_trained_model(self, model_conf, data_sources, data_sinks, old_model=None): # demo: get the database data source limit = model_conf["train_options"]["num_ora_rows"] dbdf = data_sources["panel"].get_dataframe(params={"limit": limit}) print(dbdf) # just for lolz number_to_add = model_conf["train_options"]["magic_number"] my_lame_predictor = lambda x: x + number_to_add # train a tree as a demo df = data_sources["petals"].get_dataframe() X_train = df.drop("variety", axis=1) y_train = df["variety"] # optional data prep/feature creation/refinement here... my_tree = tree.DecisionTreeClassifier() my_tree.fit(X_train, y_train) # just to demo that we can save some stuff, too (works throughout test/train/predict) data_sinks["some_data_with_strict_types"].put_dataframe( pd.DataFrame({"col_a": ["a", "b", "c"], "col_b": [1, 2, 3], "col_c": [1.1, 2.2, 3.3]}) ) model = {"lame_pred": my_lame_predictor, "petal_pred": my_tree} return model def test_trained_model(self, model_conf, data_sources, data_sinks, model): df = data_sources["petals_test"].get_dataframe() X_test = df.drop("variety", axis=1) y_test = df["variety"] my_tree = model["petal_pred"] y_predict = my_tree.predict(X_test) acc = accuracy_score(y_test, y_predict) conf = confusion_matrix(y_test, y_predict).tolist() metrics = {"accuracy": acc, "confusion_matrix": conf} # just to demo that we can load some dtyped data here as well: print(data_sources["some_data_with_strict_types"].get_dataframe()) return metrics class MyModel(ModelInterface): """Does some simple prediction """ def predict(self, model_conf, data_sources, data_sinks, model, args_dict): logger.info("Hey, look at me -- I'm carrying out a prediction") # Do some lame prediction (= addition) x_raw = args_dict["x"] # optional data prep/feature creation for x here... x = data_prep(x_raw) name_df = data_sources["first_names"].get_dataframe() random_name = name_df.sample(n=1)["name"].values[0] lame_predictor = ModelInterface()["lame_pred"] y = lame_predictor(x) # Also try iris petal-based prediction: petal_predictor = model["petal_pred"] X2 = pd.DataFrame( { "sepal.length": [args_dict["sepal.length"]], "sepal.width": [args_dict["sepal.width"]], "petal.length": [args_dict["petal.length"]], "petal.width": [args_dict["petal.width"]], } ) y2 = petal_predictor.predict(X2)[0] return {"the_result_yo": y, "random_name": random_name, "iris_variety": y2}
STACK_EDU
See Octave docs on functions for more on that. –hoc_age Oct 7 '14 at 12:59 @user3460758 Do you have any clue why you are seeing "This is GNU Emacs It works perfectly when I use it without AJAX but as soon as I set remote to true rails throws an ActionDispatch::ParamsParser::... through quad("f",0,3)) into a file of any name (e.g., SimpsonsRule.m) and invoke it from the shell (bash or whatever, not octave prompt) as octave SimpsonsRule.m and it will work. This is essentially a peak finiding program. > > > function gn = groupindcnt(filename); > > > f = dlmread(filename); > lambda = f(2:end,1); > power = f(2:end,2:21); > runmean = have a peek here This behaviour was detected in bash 4.2.39 and zsh 5.0.0 on a x86_64 system. python xml odoo-9 parse-error modified Sep 10 at 10:56 K. parsing haskell parse-error answered Sep 10 at 13:09 Eduard 508 0 votes 1answer 66 views How can I solve this ParseError related to Odoo 9? Asked by Maheen Maheen (view profile) 6 questions 0 answers 0 accepted answers Reputation: 1 on 5 Jul 2014 Latest activity Answered by Image Analyst Image Analyst (view profile) 0 questions Worked a treat, thanks so much Jaroslav and Laurent. Means you forgot an open or close bracket. I'll try to submit a patch to the emacs-devel list again for recognising single-quoted strings in octave-mode. Join the conversation Toggle Main Navigation Log In Products Solutions Academia Support Community Events Contact Us How To Buy Contact Us How To Buy Log In Products Solutions Academia Support Community Learn MATLAB today! You might need to rework the function definition so that it expects the correct thing. Based on your location, we recommend that you select: . Matlab Function Definitions Are Not Permitted In This Context norm(delta_step,2)<=0.04 ]; % 2nd argument of solvesdp is a for loop followed by norm(residuals,inf) solvesdp(constraints,... c parsing parse-error answered Aug 23 at 1:42 dasblinkenlight 458k39494846 -3 votes 1answer 29 views Deleted WordPress Site by editing PHP home page template, is it lost? [duplicate] Parse error: syntax How To Remove Parse Error In Matlab We are making improvements to UA, see the list of changes. Select Only Printed Out Cells Why would breathing pure oxygen be a bad idea? https://www.mathworks.com/matlabcentral/answers/140438-what-does-parse-error-basically-mean Hi, I had worked out it was the IF statement prior to posting by using the method of commenting out pieces of code. Then it shows the “Premature end of file". Parse Error Meaning In Hindi ME.stack(1).name, ME.stack(1).line, ME.message); fprintf(1, '%s\n', errorMessage); uiwait(warndlg(errorMessage)); end The error here is a string in ME.message. function gn = groupindcnt(filename); f = dlmread(filename); lambda = f(2:end,1); power = f(2:end,2:21); runmean = aver(lambda, power); %This for loop will loop over all the currents/columns in power for crnt = if ... Parse it means that you might look at the error and try to recognize certain words and take action based upon what you find. Hi i'm making a main menu and came over this Error: (expecting EOF, found 'else'.) at line (19,9) it doesnt make sense to me but heres the script: var IsQuitButton = Parse Error Matlab Definition And it works for me. Parse Error At Function I can't seem to get by it. tr command has no effect when used in $() and saved in a variable Where's the 0xBEEF? navigate here The file that the function is in is called LogAnalysis.hs and the ... An Error Occurred Unable to complete the action because of changes made to the page. What does the image on the back of the LotR discs represent? Parse Error For Loop Matlab more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed If you can get it working that's even better. More on this below. Check This Out or, use a Database, atomicity is what they do. Regards, Sergei. _______________________________________________ Help-octave mailing list [hidden email] https://www-old.cae.wisc.edu/mailman/listinfo/help-octave dirac Reply | Threaded Open this post in threaded view ♦ ♦ | Report Content as Inappropriate Unexpected Matlab Operator I'm new to odoo and I'm trying to build a module using the documentation of odoo 9. Join them; it only takes a minute: Sign up Parse Error compiling Octave script on Ubuntu up vote 1 down vote favorite I'm not sure what this error is in relation Meaning that the parser found your end curly with out the open which closed the class and the parser doesn't except anymore code after that...but yours obviously does with the 'else'. You can also select a location from the following list: Americas Canada (English) United States (English) Europe Belgium (English) Denmark (English) Deutschland (Deutsch) España (Español) Finland (English) France (Français) Ireland (English) Words that are both anagrams and synonyms of each other Should I use "teamo" or "skipo"? The Expression To The Left Of The Equals Sign Is Not A Valid Target For An Assignment. Extreme Value Theorem on Manifold How do you say "you all" in Esperanto? php parse-error modified Sep 19 at 15:55 DoNotDownvote_JustUpvote 82 1 vote 2answers 95 views Haskell *** Exception: Prelude.read: no parse Hi I am trying to complete CIS194 Spring 13 when I Sorry for that but how can i correct this code ? It's showing up especially when leading spaces are left out. this contact form Posted on October 15, 2014 Create a font stroke in Gimp Posted on November 26, 2015 Arch Linux: install Redmine 1.3 with SQLite Posted on January 7, 2012 Convert accent characters to output array C you would write disp(C); )Then calculate sin(A) and store the resulting array in a new array named D. I would say this is a bug. Are illegal immigrants more likely to commit crimes? I'll > be more diligent > with the brackets before posting in the future. > > Regards > Martin > > ----- The point is not parenthesis in particular. I just read through that quickly and I think what you're saying is that the .m file should have the same name as the function? ossama 528 0 votes 0answers 40 views MyBatis Batch Insert Error the insert SQL However, I will need to use pass by reference on some of the data passed to the function. Why are planets not crushed by gravity? Related Content Join the 15-year community celebration. php parse-error modified Oct 2 at 12:26 marc_s 454k938711033 -1 votes 3answers 38 views PHP parse error, cant see it Error: parse error line 8 not expecting ',' I just don't for i = 1:1:9000 if (sign*(data(i+499)-runmean(i,crnt)) > 0) sign = -1*sign; cross(i) = 1; n = n + 1; If you know some good reason for that behaviour, feel free to leave a comment. java android simpledateformat parse-error modified Oct 19 at 12:08 lhuber 535 2 votes 2answers 771 views Rails 4 remote form issues bad request (ParamsParse::ParseError) I have a problem with my remote
OPCFW_CODE
So the system rebooted at about 9:30 a.m. I don’t know why yet. It may have been a node reboot. I’m looking into it. At the time, it may have been in the process of migrating a cPanel backup as a test. That failed without an error several times, and then seemed to complete, but without any features enabled at all on the migrated site. The system them gave me a warning that the configuration hadn’t been checked since the most recent update (there was no update that I know of). Running the configuration check results in the error The feature *Administration user* cannot be disabled, as it is used by the following virtual servers : followed by the two previously created virtual servers (which are still working, oddly enough). Next the system went into post-install configuration, again for reasons unknown at this time. When it got to the plain text / hashed password option, it returned the error Cannot write to directory /etc/webmin/virtual-server/ Which also comes up when I attempt to fix things using suggestions in previous posts of this nature. The dashboard also isn’t working in either Webmin or Virtualmin. I wound up reinstalling the OS and Virtualmin because there were other things going on at the same time that made it impossible to diagnose the problem, and the server wasn’t in production status anyway. Were you migrating a cPanel site when the problem occurred? Has anyone found a solution for this? I have an active system which was working great… also Rocky Linux. I did have to do a network change, so all new IP addresses. That went as smooth as it could, but then a few months later, after a reboot, recheck virtualmin popped up. I can’t write to /etc/webmin/virtual-server/ but the directory positively exist. I do also get a message when trying to check the features. When I work on those it says Administration user cannot be disabled because is using it. Oddly, there is no place to check or uncheck that feature. Seems it is enabled by default. Thanks for any help. LetsEncrypt has disappeared in the process and I’m getting closer to expirations. I have had the same thing happen, no idea what has changed, nothing i have done myself. traced down the problem to the following file: /etc/webmin/virtual-server/config renaming it, then copied back a version from another system → reboot system, Virtual min may now have some weird things in the template settings, though at least it works again. For me this happened on a system that only is used as a slave-nameserver for virtualmin that has been running since 2019. the config on my 3 NS nodes is identical, so i do not have problems after doing this. If you try this, be careful, you may break your virtual domains doing this!.
OPCFW_CODE
Azure RTOS is an embedded development suite including a small but powerful operating system that provides reliable, ultra-fast performance for resource-constrained devices. Azure RTOS include 5 components. In project, it is compiled as 5 libs 2 Basic Threadx Structure Below picture shows the initialization process of Azure RTOS. 2.1 main function In main function, we can set up the processor, initialize the board. For most applications, the main function simply calls tx_kernel_enter, which is the entry into threadx. This function is the first ThreadX function called during initialization. It is important to note that this routine never returns. The processing of this function is relatively simple. It calls several ThreadX initialization functions, calls the application define function, and then invokes the scheduler. 2.3 Application Definition Function The tx_application_define function defines all of the initial application threads, queues, semaphores, mutexes, event flags, memory pools, and timers. It is also possible to create and delete system resources from threads during the normal operation of the application. However, all initial application resources are defined here. When tx_application_define returns, control is transferred to the thread scheduling loop. This marks the end of initialization. 2.4 your thread entry Every thread is just an infinite loop, that is what your code mainly about. 2.5 important files tx_api.h : C header file containing all system equates, data structures, and service prototypes. All application files that use ThreadX api should include this header file. tx_port.h : C header files containing all development-tool and target specific data definitions and structures. 3 Build a ThreadX application There are four steps required to build a ThreadX application. 4 Simple demo unsigned long my_thread_counter = 0; /* Enter the ThreadX kernel. */ void tx_application_define(void *first_unused_memory) /* Create my_thread! */ tx_thread_create(&my_thread, "My Thread", my_thread_entry, 0x1234, first_unused_memory, 1024, 3, 3, TX_NO_TIME_SLICE, TX_AUTO_START); void my_thread_entry(ULONG thread_input) /* Enter into a forever loop. */ /* Increment thread counter. */ /* Sleep for 1 tick. */ 5 Debug with TAD The Thread Aware Debugging (TAD) tool is a useful and powerful instrument to debug the Azure RTOS application. With MCUXpresso IDE 11.4.1, we can show the Azure RTOS ThreasX TAD views. We take the evkmimxrt1060_threadx_demo as example. Theads view shows the threads in a table, we can see the priority, state, stack usage for each thread. Message Queues view shows the message queues in a table Semaphores view shows the semaphores in a table Suspended means the threads that are suspended and waiting for access to current resource. Mutexes View shows the information about mutexes used inside the application Event flags view shows the event flags in a table. Suspended : threads that are suspended and waiting for access to current resource.
OPCFW_CODE
Please find attached the planning document of the project. The developer must be well versed with excel VBA with the use of class modules. 29 freelance font une offre moyenne de ₹69515 pour ce travail Hello, I have gone through your job posting and become very much interested to work with you. I am an expert in this field. I have already completed several projects like this. For evidence you can see my profile. Pl Plus Hello, my name is Cristian, i am Excel/VBA expert. I am in the top 5 of freelancers in this area and i have more than 240 projects successfully completed here. i work alone, i do not have a team or outsource the proje Plus Hi, I am financial experts and a guru in Excel with extensive knowledge and expertise in advanced formulas, Pivot Tables, Tables, and Visual Basic Application including preparation of available interfaces, dynamic dash Plus Hi, Nice to know your requirement. I am an excel vba, vb6 professional and have delivered over hundred excel vba projects in the last one year. Its all doable. However, will need a review phase of a day/two to finali Plus Hi, My name is Virendra. I am experienced Data Analyst and Macro Developer. I have read your instructions carefully from attached document. I will use class module in excel vba to create procedures. I have excelle Plus Hi. Sir Thanks for your post. I am interested in your project. I am a expert who have many experiences in Excel, Vba If you hire me, I can complete this project in short time as you want. I can start working right now. Plus Dear Client, I am expert in VBA. I have 5+ Years of Experience in EXCEL (VBA and MACROS), Web Scrapping (Python, Octoparse, Parsehub,Selenium and Beautifulsoup), Data Entry, Product Listing and Web Research. I will be Plus "I have 10+ years of experience working with Microsoft Excel (+VBA Macros development), PowerPoint, Word. I am also very good at capturing data from internet and organizing it in a way which becomes very easy to retrie Plus I have been working as a typist and developer with LabView, python, Java, office,MATLAB, vb, c, c++,web programs for 7+ years. In these years, I have many experiences in these branches. Hi, This is Vipin, an alumnus of IIT Kanpur. I have 6+ years of experience in customized software, web and mobile app development and has expertise in Android, iOS, Python, Magento, PHP, HTML, Java, Angular and Ioni Plus Dear Hiring Manager, Thank you for the opportunity to apply for the React Developer role at your company. After reviewing your job description, it’s clear that you’re looking for a candidate that is extremely familiar Plus ⭐⭐⭐Dear sir!⭐⭐⭐ ✅I am very interested in your project and I am exciting. ✅I read your project details carefully and I thought that I am the best fit developer for your project. ✅I have rich experience with your project Plus Hello, my name is Aboubacar Nimaga, I am interested in your project, must also believe that I am qualified for such work because the time will be taken into account and the price is also important. I am interested in your project. I am an Excel expert and have a great experience in excel automation with formulas and VBA since my first career 20 years ago. I assured that you will get high quality / high accuracy / Plus Hello I’m from Malaysia and I have found your project and wish to state that I have the required expertise to complete the task. I hope we can collaborate as a good teamwork. I assure to complete the job as soon as pos Plus Hi, I can do the project just like you described. I have a very good command of VBA. I Already accomplished such a project that integrates Excel with Oracle EBS R12 and have dynamic interactive ribbons and men Plus Hi, I have gone through the requirements and very sure to assist you with same. Kindly initiate the message box so that we can have a quick call/chat to discuss. Best Regards, Kabir Infocom Pvt Ltd I am all free to start working. I am good in Excell and I will finish task on time & within Budget. Let me know further details to start your task now. Highly experienced in Financial analysis and preparations,data processing,business analysis,data entry,excel advanced functions,financial modeling,business tools etc..
OPCFW_CODE
# -*- coding: utf-8 -*- """ Created on Wed Apr 3 10:34:54 2019 @author: YSu """ import numpy as np import math def fillgap(Q,d): #The first step is to check where the gaps are gaps_location=np.argwhere(np.isnan(Q)) if len(gaps_location)==0: print('There is no gap') else: gaps=Q.isnull().astype(int).groupby(Q.notnull().astype(int).cumsum()).sum() list_of_gaps = gaps.loc[~(gaps==0)] largest_gap=np.max(gaps) if largest_gap>=d: print('More cleaning up') else: Index_correction=0 for i in range(0,len(list_of_gaps)): Num_NaNs=list_of_gaps.values[i] Num_previous_point=math.ceil(Num_NaNs/2) for j in range(0,Num_NaNs): gap_location=list_of_gaps.index.values[i]+Index_correction if j <=Num_previous_point: Q.iloc[gap_location]=Q.iloc[gap_location-1] Index_correction=Index_correction+1 else: Q.iloc[gap_location]=Q.iloc[gap_location+Num_NaNs+1] Index_correction=Index_correction+1 gaps_location_2=np.argwhere(np.isnan(Q)) if len(gaps_location)==0: pass elif len(gaps_location_2)==0: print('gaps are filled') else: print("there are still gaps") print(gaps_location_2) return Q
STACK_EDU
How to Deploy a Contract to the Secret Network on Windows Jul 28, 2023 This article is for beginners who want to learn how to deploy their first contract on a Secret Network. If you don't know what Secret Network is, it is a blockchain on Cosmos that solves the privacy problem. Other blockchains are public and anyone can see your transaction history and everything if they know your address. On Secret Network, no one sees what you don't want them to see, and that's very interesting. Secret Network is definitely one of the most interesting projects on Cosmos in my opinion and it's definitely worth finding out more about. I am not going to explain what Secret Network is in this article. It is described very well on the Secret website. The purpose of this article is to help you deploy a smart contract on testnet. So let's roll up our sleeves and get started. The first thing you need is an IDE to use for development. There are many choices. I recommend installing the Intellij IDEA and the Rust plugin. Keep in mind that Secret's smart contracts are written in Rust, in a framework called CosmWasm. It's worth checking out their site, they have great documentation. If you are not familiar with Rust, you will need to learn it. For learning Rust, I recommend my favourite channel Let’s Get Rusty. If you are an experienced developer, use the IDE of your choice. If you are not sure what to choose, install Intellij IDEA and the Rust plugin by JetBrains. Rust and Wasm You will also need Docker as well for the local Secret Network. Download and install Docker from here. Now you should have everything that you need. Open PowerShell and clone the smart contract example. This is a working contract. In the next article we will explain how to program contracts, today we will explain how to compile and deploy them. Once you have the repo, we need to compile the code. Use the command: Then go into the Makefile and edit "build-mainnet-reproducible". Let's use the latest Secret Contract Optimizer. Change from 1.0.8 to 1.0.9. Then run the command: If you have any problems in PowerShell, you need to run it in WSL2. And then enable Ubuntu in Docker Settings -> Resources -> WSL Integration. Do not forget to use the "sudo" command when using WSL, otherwise you will get an error message with permissions. The output of this command will be a zipped, optimized build, ready to be stored on the Secret Network. Check your project root. There should be a file called contract.wasm.gz. Let's unpack it with the command: Now it's time to use secret cli. But we have not installed it yet. Let's do it. Go to this page and download the latest version. Put it in a file that you do not intend to remove in the near future. Rename the file to "secretcli". Go to the Windows Environment Properties and add the path to the 'secretcli' to the 'Path'. Then restart your PowerShell and if you have done everything correctly, you should not get any error when you run the following command: Well done! You are now ready to send your first contract to testnet. The first thing you need to do is to tell testnet that we want to communicate with it. Use this command: Now create the wallet. Use this command: Instead of "my_wallet" you can choose any name you like. If you did it correctly, you should see your wallet after running the following command: Everything costs some fees, this is the same on Secret as on other blockchains. So we need a test SCRT for deployment. Go here and send tokens to your wallet. Enter the address of your newly created wallet. It starts with "secret1...". Time to deploy our contract. Let's deploy it. Now let's verify it. Go here and check that you have two transactions in your wallet. The first should be receiving tokens from Faucet. The second should be a transaction of type "Stored Contract". Below is what you should see: Now we need to initialize the contract. Use this command: Instead of "code_id", use the code ID of your contract. You can find the ID in the transaction history on Blockchain Explorer. Now the contract is deployed and initialized on testnet. Use this command or check the transaction history to find out the contract address. So you have done it! You have created your first contract on the Secret Network! Well done! Now you are ready to start developing. Next time we will develop a simple dApp. Happy coding!
OPCFW_CODE
The influence of environmental variation on the microbiome during early-life stages in reptiles Type of DegreePhD Dissertation Restriction TypeAuburn University Users MetadataShow full item record My dissertation research seeks to understand how environmental variation, including maternal effects, might influence the microbiome of reptiles and how those differences translate to phenotypic variation. My research framework integrates both observational and experimental science through field and lab-based methods. Documenting environmentally mediated changes in the microbiome and their effects on hosts will provide a robust foundation for understanding the role of microbiome plasticity in shaping host phenotypes, including growth, physiology, and behavior. For my first chapter, I sought to understand how gut homeostasis is influenced by environmental variation (in the form of aquatic pollutants like estrogen). I experimentally assigned 23 hatchling American alligators (Alligator mississippiensis) to three ecologically relevant treatments (control, low, and high estrogen concentrations) for ten weeks. Gut microbial samples were collected following diet treatments and microbial diversity was determined using 16S rRNA gene-sequencing. Individuals in estrogen-treatment groups had decreased microbial diversity, but a greater relative abundance of operational taxonomic units than those in the control group. This effect was dose-dependent; as individuals were exposed to more estrogen, their microbiota became less diverse, less rich, and less even. Findings from this study suggest that environmental contamination can influence wildlife populations at the internal, microbial level, which may lead to future deleterious health effects. For my second chapter, I sought to effectively sample and manipulate the microbiome of eggshells. Although most vertebrates are oviparous, little is known about microorganisms on the surface of eggshells and their functions, particularly on eggs of non-avian reptiles. I developed a novel method to effectively sample (i.e., whole-egg sonication) and manipulate the eggshell microbiome of non-avian reptiles while minimizing contamination from external sources. Overall, my results provide useful guidelines for future manipulative studies that examine the source and function of the eggshell microbiome. For my third chapter, I experimentally manipulated the maternal gut microbiome using antibiotics and evaluate consequences on offspring phenotype in the brown anole lizard (Anolis sagrei). DNA was extracted from maternal gut tissue and cloacal samples and sequenced at the 16S rRNA gene. Eggs were incubated and embryo/hatchling phenotypes were recorded (e.g., survival, hatchling morphology). I found that treatment mothers had reduced gut microflora diversity and produced larger eggs/hatchlings than control mothers. Findings from this study provide new insight into the role of maternal gut microbiota and its potential functional significance on offspring. For my concluding chapter, I conducted a systematic review on vertical transmission of microbiota in non-human animals. I found that many studies examining vertical transmission of microbiomes fail to collect whole microbiome samples from both maternal and offspring sources, particularly for oviparous vertebrates. An ideal microbiome study incorporates host factors, microbe-microbe interactions, and environmental factors. Together, results from my dissertation suggest that the gut microbiome is highly influenced by environmental variation, including maternal effects, in ways that may affect offspring fitness. As evolutionary biologists continue to merge microbiome science and ecology, examining microbiomes in oviparous taxa may provide insight into how microbiota shape host phenotypes.
OPCFW_CODE
BERT is high performing across a wide range of NLP tasks, but it is also very, very large. Its size can make it impractical for modeling teams to use in their constrained environments. In response, compression is increasingly important for anyone intending to use BERT. But most of these compression techniques are limited in their ability to provide practical guidance on the tradeoff between model architecture size and model performance. In this technical case study, SigOpt ML Engineer, Meghana Ravikumar addresses this shortcoming by applying Multimetric Bayesian Optimization to distill BERT for Question Answering tasks using SQUAD 2.0. In yesterday’s presentation, she presented the results from her experiment, including these primary takeaways: - Meghana uncovered a configuration of BERT that was 22% smaller than the baseline model (in number of parameters), but retained a similar level of accuracy (~67% Exact) using SigOpt Multimetric Bayesian Optimization - Meghana tracked and visualized her runs through the process using SigOpt Experiment Management, which made it quicker to establish a viable baseline and easier to develop intuition on the model’s behavior - There are a wide variety of practical implications for running experiments like this to uncover ways to use BERT for real-world modeling tasks And here is a more specific summary of the presentation. Click through to view any segment you missed: - Background on BERT, various distillation techniques and the two primary goals of this particular use case – understanding tradeoffs in size and performance for BERT (0:48) - Overview of the experiment design, which applies SigOpt Multimetric Bayesian Optimization to tune a distillation of BERT for SQUAD 2.0 question answering tasks and tracks progress through training and tuning with SigOpt Experiment Management (2:08) - Deeper explanation of distillation in the context of NLP and BERT, how it is used to train a smaller student model from a larger teacher, and the setup for the hyperparameter optimization process (3:30) - Process for defining the student model and approach to creating a baseline for the hyperparameter optimization experiment, with baseline values of 67.07% for Exact as a baseline measurement of accuracy and 66.3M Parameters as a baseline measurement of size (5:25) - How SigOpt automates the hyperparameter optimization process and automates Multimetric Bayesian Optimization more specifically to evaluate these competing metrics for size and accuracy (6:49) - Establishing a Metric Threshold to focus the optimizer on the parameter space that is above 50% accuracy (8:58) - Overview of parameters to be optimized, including training parameters, architecture parameters and distillation parameters, and the optimization loop itself (9:55) - Cluster orchestration setup and how it is initialized with Ray Core to facilitate at-scale distribution of the tuning job in parallel (11:31) - Review of results, including all configurations of the model trained by SigOpt through the optimization run in its trade off between exploration and exploitation (11:58) - Analysis of specific optimal points on this Pareto Frontier of results that were displayed in the SigOpt dashboard at the end of the optimization run, including a model configuration that retains accuracy and reduces model size by 22.47% (12:42) - Evaluation of the tuning job in the SigOpt dashboard, including comparisons of metrics, parameter importance, and parallel coordinates(13:58) - Analysis of the model’s performance on specific Question Answering topics to more deeply understand model behavior (15:22) - Summary of conclusions from this process, including the value of Multimetric Bayesian Optimization for evaluating these tradeoffs between metrics (18:47) - The most interesting trend from the optimal architecture is that heads stayed constant and layers varied, in response to a question from the audience (19:38) - Discussion around how optimizing two metrics at once was performed within SigOpt automatically and without any additional effort from Meghana (20:19) To recreate or repurpose this work please use this repo. Model checkpoints for the models in the results table can be found here. The AMI used to run this code can be found here. To play around with the SigOpt dashboard and analyze results for yourself, take a look at the experiment. Below is a screenshot from the SigOpt dashboard. You can also watch the recording or share it with your colleagues. If you’re interested in learning more, follow our blog or try our product. If you found Experiment Management particularly compelling, join the private beta to get free access. Use SigOpt free. Sign up today.
OPCFW_CODE
During last contest I experienced random timeouts although my code is used to reply within 4 ms out of 50 ms. So I decided to investigate. The first lead to explore was to check my code. The timeout seemed to happen at different locations depending on the game but for a given game, it seemed it always happened at the same location. So I printed timestamps to have an idea where it comes from. And for the first game I studied, the timeout came from a basic stream().collect() in order to append strings. This statement took 40 ms to execute which was very strange. I decided to replace it by a loop but it didn’t solve the timeout. I tried to study other games and each time the timeouts came from different basic statements. Then I started to explore a second lead. The garbage collector. Maybe a garbage collection was triggered from time to time and suspended the player run. In Java it is possible to get the number of GC with this code: “ManagementFactory.getGarbageCollectorMXBeans().get(i).getCollectionCount()”. The result was very fast to analyse since there were no GCs during the whole game (at least nothing reported by this method). So it seemed that the garbage collection was not the issue. Edit: actually there were two garbage collections; see my additional comment below. At that moment it remained only one last lead in my mind. The JIT compilation. In Java the time spent in JIT compilation is available with “ManagementFactory.getCompilationMXBean().getTotalCompilationTime()” (if supported by the JVM). I decided to print timestamps and JIT compilation time in my simulation loop (each simulation is supposed to take almost the same time). Here is the result of the first round. iteration / timestamp in µs / JIT compilation time in ms (from the beginning) 1 / 14952 / 228 2 / 16747 / 229 3 / 19466 / 231 4 / 19739 / 231 5 / 19960 / 231 6 / 20201 / 231 On iteration 4, 5, 6, there are no JIT compilations and each simulation lasts around 250 µs. On the other hand there is a JIT compilation of 1 ms and another one of 2 ms on iteration 2 and 3 and these simulations last 1.8 ms and 2.7 ms. So it seems that the JIT compilation really impacts the main thread accordingly. Since Codingame’s system is single-threaded, the game should be suspended to execute the JIT compilation. In addition here is the result of round 2 (for a different simulation loop). 0 / 569 / 193 1 / 1722 / 193 2 / 2263 / 193 3 / 2642 / 193 4 / 3064 / 193 5 / 56230 / 194 6 / 56555 / 194 7 / 56784 / 194 8 / 57017 / 194 9 / 57262 / 194 10/ 57484 / 194 On iteration 5, we can see a JIT compilation of 1 ms which impacts the game of 53 ms and causes the timeout (we can also see that the simulation lasts 400 µs before JIT and 250 µs after). This behaviour is observable on all games. Around the round 2 or 3, there is a JIT compilation which impacts the game of several milliseconds (I don’t really know why the reported JIT compilation time does not match the delay on the simulation in this case). Edit: as mentioned in my additional comment below, this specific JIT compilation actually also matches a GC execution. Along the full game, there are usually only four or five JIT compilations which happen during the player run. They mainly appear during the first round, then there is the “very impacted” one on round 2 or 3, then there can be one or two other JIT compilation during later rounds which could delay the main thread of 1 ms. Overall the JIT compilation takes between 1 and 2 seconds. Fortunately all other JIT compilations occur when the player is waiting for the referee input (I guess the scheduler takes this opportunity to switch the threads). By the way, funny thing: I saw a game which timed out before reading any input; the “impacted” JIT compilation occured while the player was trying to read the first input… Now I hope we can find a solution to that problem. Personnaly I never worked on that specific topic and I’m not sure what we could do. My suggestions would be: - increase the maximum time of the first rounds to avoid timeouts; - use the JVM option -Xcomp (to compile everything at the first call). I would be happy to read your comments.
OPCFW_CODE
installation disc contains the files necessary to start Windows , so it is itself a boot disk. A boot disk is actually not a computer disk in the shape of a boot. If it was, most disk drives would have a difficult time reading it. Instead, a boot disk is a disk that a computer can start up or "boot" from. If a problem is preventing Windows from starting, you can use the installation CD to start Windows . The installation CD also contains Startup Repair, which you can use to repair Windows if a problem prevents it from starting correctly. Startup Repair can automatically fix many of the problems that in the past required a boot disk to fix. A boot disk (sometimes called a startup disk ) is a type of removable media, such as a floppy disk or a CD, that contains startup files that your computer can use to start Windows . CD and DVD boot disks are often used to start up a computer when the operating system on the internal hard drive won't load.The startup files are also stored on your computer's hard disk, but if those startup files become damaged, you can use the files on a boot disk to start Windows Earlier operating systems that used the FAT or FAT32 file systems, such as Windows 95 and Windows 98 , a boot disk was especially useful because it allowed a person to access files on a hard disk even if Windows was unable to start. This ability also represented a security risk, because anyone with a boot disk and access to the computer could start the computer and access any file. Hard disks formatted with NTFS have built-in security features that prevent using a boot disk to access files.. Let looks at some useful "boot CD downloads" to create one for Windows OS,s.1-Ultimate Boot CD for Windows - This BartPE-based boot disc comes with a huge selection of tools to access your data and get your PC booting properly again. Some of them are even useful.UBCD takes a long time to load and asks you some odd questions before it's finally up. But once it's there, you can edit the Windows Registry (yes, the one on the hard drive) in RegEdit, recover deleted files, and even run benchmarks. setting up UBCD is identical to creating a BartPE disc--with the same possibility of failure. But when it works, you get a lot more.Price: Free Download Ultimate Boot CD for Windows. 2-Puppy Linux - A third party application to create a boot CD using Linux and great for accessing NTFS-formatted hard drives--especially if you're not comfortable with Linux's whole mount concept. Just open the Drives window and select a drive, and Puppy will mount it for you--in read/write mode, if possible.Puppy will mounting the drive with read/write permissions and you not only can copy your files elsewhere, but you can also edit them. Puppy Linux comes with AbiWord, which supports .doc files, and Gnumeric, which supports .xls. And even if it mounts read-only, you can still copy the files to an external drive, most of which are formatted in the universally accessible FAT32 file system. But be careful how you click. Actions that take double-clicks in Windows, such as opening a file, take only one in Puppy. Download Puppy Linux. The BartPE operating system makes a pretty good boot disc on its own, getting you into Windows and letting you access your drive. It doesn't have much in the way of repair utilities, but it has chkdsk, which should probably be the first one you try.To create a CD, the program needs the Windows 2000 or XP installation files. One place you're sure to find them is an actual Windows installation CD-ROM. But the recovery disc that came with your PC probably doesn't have them. Luckily, if your PC came with XP installed (and thus, not with a true XP CD), the necessary files are probably in a folder called C:\Windows\i386. But I do mean probably , not definitely. However, since the PE Builder is free, you're not losing much if it can't create a disc. 4-Vista Recovery Disc - a unique distribution of Microsoft's own recovery tools. This Recovery Disc is basically a Vista installation disc minus the install files. It even has an "Install now" button that asks for a Product Key before failing. You're better off clicking the Repair your computer button. Among its Vista-only options are a tool for diagnosing and fixing startup problems, a version of System Restore that uses restore points on the hard drive, the restore portions of Vista's backup program, and a memory diagnostic tool. Price: Free Download Vista Recovery Disc. 5-Trinity Rescue Kit - TRK's command line interface could humble anyone but the most devoted Linux geek. you take the time to read the 46-page documentation and learn the program, you'll be rewarded next time disaster strikes. Among the tools that will be at your disposal are a script that runs 4 different malware scanners, a tool for resetting passwords, a Registry editor, a program that clones an NTFS partition to another PC over a network, a mass undeleter that tries to recover every deleted file on the drive, several tools for recovering data off a formatted or dying disk, two tools for fixing master boot record repair programs, and hardware diagnostics. Download Trinity Rescue Kit. Hope that you understand the benefits of having a boot disk on your wardrobe.
OPCFW_CODE
The version of LAME that Mac Audacity users are told to download is four years old, version 3.98.2. (Incidentally, Windows users can use a version that’s less than a year old; what’s up with that?) I installed LAME 3.99.5 through MacPorts, and Audacity didn’t recognize that it was installed. I tried setting the libmp3lame.dylib file in the preferences, and it still wouldn’t recognize it was installed. Same with FFmpeg. Is it possible to use more recent versions of LAME and FFmpeg? How, or why not? Audacity is married to the two software packages at the download site. Even better, the last time I tried to manually install lame and FFMpeg, it failed. You must use the automatic install and let it put the two packages in the directory of its choosing. If you have Mountain Lion, you may have troubles because /Library is a hidden folder now. There are posted work-arounds for that – to make /Library visible again. The reason that Audacity’s Libraries Preferences is tied to the recommended LAME and FFmpeg is that other versions of those libraries usually don’t have enough “symbols” compiled in, so the exported files are written without proper length information, with metadata missing and similar problems. You can export using arbitrary versions of LAME or FFmpeg by choosing “external program” when you export then pointing Audacity to the compiled binary of your choice - that is, to the LAME.EXE or FFmpeg.EXE on Windows (or to the “LAME” or “FFmpeg” file on Mac), not to the DLL or DYLIB file. Ah. But then what about on Unix/Linux? It seems that there, Audacity uses the same libraries that everyone else uses, rather than custom-built ones. And, again, why are Mac users stuck with a version that’s four years old? The situation is different on Unix/Linux because Audacity is not distributed as binaries on these platforms. The binary distributions are provided by the distribution maintainers, so it is their job to ensure that Audacity is built against a version of Lame that works. If you build Audacity from the source code, then you take on the responsibility for ensuring that the version of Lame that Audacity is built against works. On Mac OS X, what would you hope to gain by using Audacity with a more recent version of Lame? I’m not aware of any changes to LAME since 3.98.2 that offer any improvements for Audacity on Mac OS X. There was an issue with FhG V1.5 build 50 that ships with MS Windows, so there may be some marginal benefit in using 3.99.3 on Windows though I’m not certain that this issue affects Audacity at all as I’ve used both 3.99.3 and 3.98.2 on Windows and there is no noticeable difference. iTunes for MP3 creation which uses its own encoder rather than LAME. And it’s easy to do to MP3 or any of the compressed formats that iTunes supports including AAC. iTunes > Preferences > General > Import Settings. Select the format and quality. Drag or copy your music to iTunes. When it arrives, Control-Click the music and Create MP3 Version (or AAC version or whatever you picked). Both versions will appear and you can drag either one to wherever it needs to go. We give the usual boiler-plate warning that MP3, AAC, and other compressed formats cause music damage and you can’t ever fix it later. You can’t easily edit MP3, AAC, etc files later, either, without creating even more sound damage.
OPCFW_CODE
I needed to import the ChartModule in the module.ts file of the component that I wanted to use the chart in. I originally only had it imported in app.module.ts. For Angular 7 or greater version use npm install [email protected] --save npm install chart.js --save In the app.module.ts file, within the NgModule decorater there is an array called imports, mention ChartsModule in the imports array like this imports: [BrowserModule, RouterModule.forRoot(appRoutes),ChartsModule ], It helped me. It can be helpful to you as well Try using the attribute syntax for binding: I'm not certain if this will work in your case, but this has to do with what's a DOM property versus an attribute, and how Angular handles binding for both. - NG2-Charts Can't bind to 'datasets' since it isn't a known property of 'canvas' - Can't bind to 'data' since it isn't a known property of 'canvas' - ng2-chart with Angular 4: Can't bind to 'data' since it isn't a known property of 'canvas' - Can't bind to 'chartType' since it isn't a known property of 'canvas' with angular12 - How to bind json array data to chart.js with same canvas id? - Chart.js - bind data in dataset to specific chart labels - Is it possible to create a canvas to extract image data without actually rendering the canvas? - Angular / ng2-charts: Fetching json data in chart object showing: Cannot read property 'length' of undefined - Chart.js: get chart data from canvas - How to load chart data to an appended canvas - How to get the data attribute of the canvas chart created using chartjs - ChartJS have xAxes labels match data source - chartjs how to update dynamically data from database(Chartjs cant get the data) - Resetting transform: rotate() by removing and appending canvas not showing data after appending and redrawing chart - bind first property value of an array of object into chart.js - Calling data from outside of Chartjs code - How do I destroy/update Chart Data in this chart.js code example? - display ng2 chart when the property of data returned is >=0 - How to bind data from Controler to chartjs line chart to create it as dynamic? - getting additional value fields from data source for dx.chartjs doughnut chart - why isnt vue-chartjs receiving data from api? - Chart JS - Title missing when clearing canvas if no data is available - Bind Angular FormGroup to Chartjs data - Created an onclick function to remove data from a line chart in ChartsJs, but getting "Cannot read property 'data' of undefined" error - How to use computed property with data from JSON in data object for time axis in Vue using Chart.js - How to run Chart.js samples using source code - When updating Chart.js chart the data is updated but canvas is emptied - Use Google Spreadsheet as data source for Chart.js - Cant assign data to ng2 piechart after receiving it in HTTP response - Parsing webserver JSON Values to data property More Query from same tag - hide data of the datasets in radar chart? - Give '%' on Bar Chart Datalabel Chartjs - Not able to see values on bar chart - Chart.js Show Dollar Amount On Left Y And Percent Amount On Right Y - React component wont re-render ChartJS chart with redux props - nodejs express jade mongodb project: how to import include chart.js - Y-Axis with different colors - Chart js with ng-repeat - Chart.js setting maximum bar size of bar chart - Remove empty spaces created by null values in Chart.js - In Chart.js, how do I hide certain axis labels from a stacked bar chart? - Chartjs with Typescript: The types of 'interaction.mode' are incompatible between these types - Angular: How to change color of chartjs? - Create multiple chart by Chart.js - How to pass my array of objects into a graph with Chart.js - chart.js aspect ratio not square - Chart.js time object labels not updating correctly - Chartjs box plot with Y-axis negative values - How to create stacked bar chart using react-chartjs-2? - Chart.js and Firebase Ionic App Angular 4 - How to get Chart js labels in scientific notation - Angular ChartJs does not show Data for multiple Charts - Loop through array of objects in Node - ChartJS - Show percentage on hover (AngularJS) - Chartjs treemap example - Set different color for each bars using ChartJS in Angular 4 - Chart js size and placement - ChartJS : How to leave just points without lines - React-Chart-JS-2 ^3.0.5 | TypeError: Cannot read properties of undefined (reading 'visible') at Chart._getSortedDatasetMetas
OPCFW_CODE
WordPress clogs up Apache OK, I hate to be the helpless Apache noob, but I am feeling stumped here. All of a sudden last night, our WordPress site went down. I rebooted it and watched for a couple of minutes and it seemed all right, so I left it alone. Then I wake up and find it's down again. After a little investigation, I've discovered that, despite only getting 20 or so requests per minute at the time, Apache keeps forking a new instance for just about every request until it hits MaxClients, and then the instances just sit there doing nothing. Literally 0.1% CPU utilization for the whole system at that point. If I log into MySQL and look at the process list, I can see a corresponding database connection for each httpd, so it looks like the scripts are never ending. But if I request a static file or even a simple "Hello world" PHP file before it reaches MaxClients, that request will go through fine. I'm really at a loss as to even what to look at, because nobody else here has the technical sophistication to SSH into the box or even install plugins, and I know I haven't touched it in days at least — so I don't even know what could have changed to cause the problem. The setup is Apache 2.2.3/prefork with mod_php 5.2.6. Here are the obviously relevant settings (let me know if you any to know anything else): httpd.conf Timeout 20 KeepAlive Off <IfModule prefork.c> StartServers 2 MinSpareServers 1 MaxSpareServers 3 MaxClients 50 MaxRequestsPerChild 2000 </IfModule> php.ini max_execution_time = 600 ; Set so high for large file uploads max_input_time = 600 ; Set so high for large file uploads memory_limit = 128M ; Set so high for large file uploads log_errors = On A few things I've tried: Upping MaxClients This just resulted in Apache eating up all 1.6 GB of RAM and then doing the same thing as before Cutting max_execution_time and max_input_time to 15 and memory_limit to 32M Made no difference — the httpd instances were still immortal Reinstalling WordPress No difference at all tail -fing error_log No errors reported aside from reaching MaxClients tail -fing access_log to see if we're being DDOSed or something Traffic was indeed pretty low while this was happening I feel like I must be missing something right in front of my face, but for the life of me I cannot figure out what is going wrong here. So I'm hoping someone a little more experienced in sysadminning has seen whatever I'm doing wrong before. Did it starting swapping to a standstill? What plugins for WP are you using? Some attempt to increase the allowed memory for PHP (especially those that have graphics functionality). Though the memory is not always committed, it may lead to a situation where the system is swapping to a standstill. Your php scripts in Apache suddenly became very slow. You need to find out what is your bottleneck. Since you have no high cpu usage - it can be swapping, slow disk or network IO, slow database server, slow dns requests - and many other reasons. First check all logs for suspicious errors. If you still have no idea - you can try to strace apache process to see which call is slow as a start to debugging. Ok sorry forthe keepalives question.. What does "show processlist" says on the mysql server? Typically a locked/corrupted table would explain the issue. Keepalives ARE set to "off" - look at the httpd.conf
STACK_EXCHANGE
The Windows computer is an all time favorite, it is one of the easiest operating system ever made. Some tips are illustrated below which will help you in a long run. 1. Use an antivirus with less resource usage: Using computers becomes easier when speed is up to the maximum. While using a windows computer, make sure to use an antivirus which uses less RAM and processor speed in real time. This will make your boot time and machine run time faster. 2. Disable services not required: You need to disable the services that you do not use in windows computer, in order to increase its speed. Services like spool server (used for Printing services), Windows Server (required in Server computers), and other non-critical services can be disabled for better machine speed. 3. Lower the screen resolution for better performance: Higher the screen resolution more is the CPU usage in windows computer. Therefore, if you lower the screen resolution by a little (Just 1 level down), then you are bound to get better speed from the multi-tasking, that you do while using the system. 4. Use a registry cleaner to delete registry errors: Registry is used in each and every operation performed within a computer. Therefore, due to illegal operations or access violation etc. the Registry might generate errors. Therefore, use registry cleaner software in windows computer to keep your machine faster and more reliable. 5. Using security software for preventing intrusions: An often illegal intrusion made in to a windows computer remains undetected. Therefore, use a HIPS (Host intrusion Prevention Service) based firewall, which can prevent both intrusion as well as illegal programs from within the windows computer, to gain access to the internet. 6. Use temporary file cleaning software: When we use a browser within our system it often gets clogged with cookies. Various other temp files remain as it is within the windows computer, even after its use is over. Therefore, use a temp file cleaner to delete these temp files and cookies to save some space within the system. 7. Using a malware cleaner: Often, various types of malware like adware or spyware remains non-detected by the commercial antivirus software within the system. So use a malware cleaner, which have powerful cleaning and detecting capacity in a windows computer to keep it running faster and better. 8. Disabling startup programs not required: There are various programs in a windows computer, which enters its name in the registry for auto startup. You can disable the programs which are not required at boot time or as soon as the machine starts up. This will enhance the speed of the machine a lot. 9. Disabling Windows themes to increase the speed: The attractive Windows themes that we see in a windows computer also consume a lot of CPU usage and RAM. Therefore, if we are more concerned with speed rather than looks then we can easily disable Windows theme, and use the Classic view of Windows for better speed. 10. Disabling Administrative Shares and Quota Management: You can disable the administrative shares and various quota related space in hard drives in a windows computer, if you want to make the boot up faster than usual.
OPCFW_CODE
Hire Remote Swift Developers to Build Unique iOS Apps On-demand Ninja has a team of highly skilled Swift developers who are well-versed in various technologies. Our dedicated Swift developers can turn your app idea into scalable, secure, and robust native iOS applications. You can hire Swift experts from us with full onboarding, administration, infrastructure, payroll, and compliance support to help you hire the right people faster On-demand Ninja is the Best Place to Hire Swift Developers Hire skilled Swift developers from an iOS app development team. Our dedicated developers can help you build a versatile, high-performing application using cutting-edge technology. We create applications that combine design, functionality, and scalability. Get the best Swift app development services from an On-demand Ninja team with a lot of experience. Our team of top 1% experts has reached new heights of success through hard work and dedication. We have occasionally been inspired to accomplish various milestones by their extensive expertise in their fields. With their top-notch services and ongoing efforts, our empowered team constantly strives for better outcomes. We are a global hub for web and mobile app development that lets you hire highly skilled Swift developers for your Swift iOS development project hourly, full-time, or part-time. Applications geared toward iOS and OS X are developed and maintained by our Swift developers. Hire Skilled in Experienced Swift Developers in Three Simple Steps You can get high-quality Swift app development services by hiring Swift developers. For the development of iOS apps, we have been employing the agile development approach. It helps to break down complicated issues into smaller, more manageable pieces. 1. Requirement Collection: Gathering requirements is the first step in the development process. Our team develops a list of questions to comprehend customers’ technical requirements. We also begin our planning at this point. When you hire Swift developers from our team, you can be sure of receiving high-quality service. Choose the Best Swift Developers Choose Swift developers from our team of more than 100. You can choose who joins your team by interviewing and narrowing down candidates from our in-house tech pool. 3. Start Work with the Team: Our developers have now joined your team. They put in a lot of effort for you, and you can talk to them directly. You can begin assigning them responsibilities and receive daily updates and a timesheet. - How to Hire Swift Developers Online? - Defining Swift - What is iOS App Development using Swift? - What is the Salary of a Swift Developer? How to Hire Swift Developers Online? Swift developers should have the necessary soft skills and the aforementioned skills. They include the capacity to collaborate effectively with others, the capacity to effectively convey their concepts to clients and coworkers, an innovative outlook, patience, and dedication to the task at hand. Pick a developer with these characteristics, and they will be a decent expansion to your group. It is more complicated to hire a Swift developer who is dedicated. However, if you adhere to certain rules, you will have a sufficient understanding of how to hire a Swift developer. All of this results in a high-quality product at an affordable cost. Keep in mind that both soft and hard skills directly affect the quality of Swift app developers. Remember to value the technical experts' capacity for communication and teamwork. We at On-demand Ninja provide high-quality services to establish long-term relationships with our customers. Utilizing other cutting-edge technologies, we create Swift applications agilely to boost your company's productivity and functionality. Apple developed the open-source, general-purpose programming language known as Swift. Python's influence makes the language quick and easy to understand. Native iOS and macOS development are the primary uses of Swift. Swift is used to writing a lot of well-known apps like WordPress, Lyft, and LinkedIn. Swift can be used for more than just making apps for Apple products; it was made to be a language that could be used for anything. Swift has been compatible with the Linux operating system since version 2.2 was released in 2016 and will be available for Windows in 2020, following the release of version 5.3. It is now a programming language that works on all platforms and can be used with the top three operating systems. As a result, Swift is being used to create web services and even web applications, and developers may discover additional applications for it in the future. What is iOS App Development using Swift? Developers thoroughly understand how iOS functions on various Apple devices, including the iPhone, iPad, and Apple Watch. Apps for iOS are created, tested, and refined by iOS developers to meet the requirements of their customers. iOS app development is typically carried out in a couple of languages: Swift or Objective-C: In the following sections, you can find out more about the software products that iOS developers create. For example, iOS developers can work for a company directly, independently, or both. Although work-from-home options vary from company to company, many iOS developers work remotely. Because most businesses rely on applications for customer relations or product support, iOS developers can work in various industries. However, the top industries that employ iOS developers are retail, technology, and finance. What is the Salary of a Swift Developer? Due to Apple's rapid expansion and evolution and a growing desire for new information and topics, a new generation of iOS enthusiasts and novice iOS developers has emerged. The demand for iOS developers and engineering iOS developer positions has skyrocketed as more and more Apple products, and services become ingrained in people's daily lives. An iOS app developer typically earns $97,875 annually in the United States. The following are some of the things that can affect an iOS developer's pay: What is the salary of Shopify web designers? - And current demand is all factors One of the biggest players in the smartphone industry, Apple has a large and loyal customer base that is expanding rapidly and will continue to do so. However, iOS developers will have a very bright future, and this market will grow greatly in the coming years. Exclusive Perks of Using Swift for iOS App Development Swift has demonstrated itself as a programming language that is smarter and capable of establishing a connection between iOS app developers, brands, and end users. For your next mobile project, Swift outperforms Objective-C in six fundamental ways Swift unifies the language in a way that Objective-C never did with memory management. Over the entire procedural and object-oriented code paths, Automatic Reference Counting (ARC) is supported to its fullest extent. Sorted Code Structure Swift’s Concise Code Structure supports string interpolation, which lets programmers insert variables directly inline to a user-facing string like a label or button title without remembering tokens. Swift is Really Fast Swift code performance demonstrates Apple’s commitment to increasing the speed at which app logic can be executed in Swift. Swift’s clean syntax, which makes it easier to read and write, is its most significant advantage. Compared to Objective-C, the number of lines of code required to implement an option in Swift is significantly lower. Speed Swift also offers several speed advantages during development, which saves money. For instance, Python’s implementation of a complex object sort will run 3.9 times faster than Python’s. In 2015, Swift became open-source, enabling the language to be used on various platforms and for backend infrastructure.
OPCFW_CODE
The latest free template release, Variant Note, includes three .PSD files (Adobe Photoshop format) that allows the images in the template to be customized in an easy way if you use Adobe Photoshop or any other image editor that supports the .PSD file format. In order to help you get started with the template, I wanted to write a short explanation about the included files and how they are used. The three files included with the template are found in the “psd” folder: variant-note-background.psd contains the header background, a gray box with a gradient and an outer glow. The box is 1015 pixels wide and centered in the document. It has two layers, where the first is a plain white background layer without any effects. The second is the main layer with the header box, and it has a number of different effects applied to a simple rectangular shape. This box can be replaced with a full-size header image if wanted, but you can also edit each individual effect to give the template a unique look. If you change the body background color in the variant-note.css file, you should make sure to change the color of the background layer in this file as well. Once edited, export the image as a .jpg (.png works as well if you want to use transparency, but the file size will likely be much bigger). variant-note-front.psd is the sample image in the content of the index.html file. It shows a heavily edited photo of a brick wall in a Stockholm metro station (Hornstull, for those of you who like useless trivia). The photo has been color-matched to the link color of the template, and split into three parts by simply deleting parts of the image. As with the header background, it has a white background color layer and an actual image layer. The image layer has one effect added to it, a one-pixel inner glow that matches the gray lines of the header elements in the template design. If you want to create a similar image but with your own photo, you can use the colors in the image layer and the gradient map feature of Adobe Photoshop to create a similar look. Add the inner glow to your own image layer, and you will get the border to the image. variant-note-logo.psd is the small box with rounded corners in the header. In the template, it is a .png file with a transparent background. This file works in a similar way as the one used in the Replacing the transparent logo in Basic Landing tutorial, so I recommend that you read that post if you want to make an own logo. This file has two layers, one for the background with the rounded corners and one for the sample logotype shape. Each layer has a set of effects, where the logotype layer has a color overlay that matches the link color (color value #9e1616). As long as the logo image is exported as a .png with transparency, there is no need to adjust any background color. Put your own logotype in a separate layer, use the existing rounded-corner box as a background – and you get a site icon that matches the template design. Variant Note uses a @font-face kit from Font Squirrel to give the template a more exciting look. To learn more about custom fonts, I recommend this post. There are also some text shadows used. For more reading about text shadows in CSS, see this post. If you have any questions, comments, requests or ideas about Variant Note, post a comment to this entry. Most new templates generate a number of e-mails with questions, and I would be happy to make an alternate version along with a how-to guide if anyone out there has a good idea for any exciting modification. This article was written by Andreas Viklund Web designer, writer and the creative engine behind this website. Author of most of the free website templates, along with some of the WordPress themes.
OPCFW_CODE
Monitoring Remote Sites with Traefik and PrometheusPublished on Partial screenshot of a traefik metric dashboard I have several sites deployed on VPSs like DigitalOcean that have been dockerized and are reverse proxied by traefik so they don’t have to worry about Let’s Encrypt, https redirection, etc. Until recently I had very little insight into these sites and infrastructure. I couldn’t answer basic questions like: - How many requests is each site handling - What are the response times for each site - Is the box over / underprovisioned For someone who has repeatedly blogged about metrics and observability (here, here, here, here, and here) – this gap was definitely a sore spot for me. I sowed the gap shut with traefix static routes, prometheus metrics, basic authentication, and Let’s Encrypt. Do note that this article assumes you already having a working setup with traefik and let’s encrypt. Exposing Traefik Metrics Traefik keeps count of how many requests each backend has handled and the duration of these requests. Traefik exposes several possibilities for exporting these metrics, but only one of them is a pull model, which is prometheus. A pull model is ideal here as it allows one to be able to track metrics from a laptop without any errors on the server side, and one can more easily tell if a service has malfunctioned. So we’ll allow metrics to be exposed with a modification to our traefik.toml If traefik is running inside a docker container (in my case, docker compose) the default api port needs to be exposed. Now once traefik starts, we can retrieve metrics from http://<server-ip>:8080/metrics. Three things wrong: - Metrics broadcast to the entire internet. We need to lock this down to only authenticated individuals. - Metrics served over http so others can snoop on the metrics. - Typically I find it preferable to lock traffic down to as few ports as possible. We fix these issues by binding the listening address to localhost, and reverse proxying through a traefik frontend that forces basic authentication and TLS. To bind port 8080 so it only listens locally, update the docker compose file Now let’s proxy traffic through a basic auth TLS frontend. First, create a prometheus user with a password of “mypassword” encoded with bcrypt using htpasswd (installed through apache2-utils on Ubuntu): Potential performance issues with constantly authenticating using bcrypt can be mitigated with SHA1. Though in practice CPU usage is less than 1%. Next, we configure traefik using the file directive. This basically configures traefik with static routes – think nginx or apache. While not explicitly mentioned anywhere, one can configure traefik with as many route providers as necessary (in this case, docker and file). A nice feature is that the file provider can delegate to a separate file that can be watched so traefik doesn’t need to be restarted on config change. For the sake of this article, I’m keeping everything in one file. Add the below to traefik.toml. It’ll listen for metrics.myapp.example.com, only allow those who authenticate as prometheus, and then forward the request to our traefik metrics. Note that this relies on Let’s Encrypt working, as metrics.myapp.example.com will automatically be assigned a cert. Pretty neat! Exposing System Metrics We now have a slew of metrics giving insights into the number of requests, response times, etc, but we’re missing system metrics to know if we’ve over or under provisioned the box. That’s where node_exporter comes into play. It’s analogous to collectd, telegraf, and diamond, but it is geared for prometheus’s pull model. While one can use the prometheus-node-exporter apt package, I opted to install node_exporter’s latest version right from Github Releases. I avoided installing via docker as the readme states “It’s not recommended to deploy it as a Docker container because it requires access to the host system”. No major harm done, the node_exporter package is a single executable. Executing it reveals some work to do. - Plain http - Internet accessible port 9100 The official standpoint is that anything requiring auth or security should use a reverse proxy. That’s fine, good to know. Downloading just an executable isn’t installing it, so let’s do that now before we forget. This is for systemd, so steps may vary. Now we’ll add authentication and security. Node Exporter through Traefik Now that we have multiple metric exporters on the box, it may seem tempting to look for a solution that aggregates both exporters so we only need to configure prometheus to scrape one endpoint. One agent to rule them all goes into depth as to why that’d be a bad idea, but the gist is operational bottlenecks, lack of isolation, and bottom up configuration (instead of top down like prometheus prefers). Ok two endpoints are needed for our two metric exporters. We could split our node.metrics.myapp.example.com (phew!). This is a fine approach, but I’m going let the Let’s Encrypt servers have a breather and only work with metrics.myapp.example.com. We’ll have /traefik route to /metrics on our traefik server and traefik.toml config with commentary following: The backend for internal-node is “172.17.0.1” and not “127.0.0.1”. 172.17.0.1 is the ip address of the docker bridge. This interface links containers with each other and the outside world. If we had used 127.0.0.1 that would be traffic local to traefik inside it’s container (which node-exporter is not inside, it resides on the host system). The bridge ip address can be found by executing: We now can update the node-exporter to listen to internal traffic only on 172.17.0.1. - One can’t override the backend in a frontend route based on path, so two frontends are created. This means that the config turned out a bit more verbose. - This necessitates another entry for basic auth. In this example I copied and pasted, but one could generate a new bcrypt hash with the same password or use a different password - Each frontend rule uses the ReplacePathmodifier to change the path to /metricsso something like /node-exportergets translated to - I prefix each frontend and backend endpoint with “internal” so that these can be excluded in prometheus queries with a simple regex. Now it’s time for prometheus to actually scrape these metrics. The prometheus config can remain pretty bland, if only a bit redundant. And finally Grafana I haven’t quite got the hang of promQL, so I’d recommend importing these two dashboards: Play with and tweak as needed. Definitely a lot of pros with this experience - Traefik contains metrics and only needed configuration to expose - Traefik can reverse proxy through static routes in a config file. I was worried I’d have to setup basic auth and tls through nginx, apache, or something like ghost tunnel, which I’m unfamiliar with. I love nginx, but I love managing fewer services more. - Installing node_exporter was a 5 minute ordeal - Pre-existing Grafana dashboards for node_exporter and traefik - A metric pull model sits more comfortably with me, as the alternative would be to push the metrics to my home server where I house databases such as graphite and timescale. Pushing data to my home network is a lovely thought, but one I don’t want to depend on right now. In fact, it was such a pleasant experience that even for boxes where I don’t host web sites, I’ll be installing this traefik setup for node_exporter.
OPCFW_CODE
Novel–The Legendary Mechanic–The Legendary Mechanic Chapter 1068 – Extortion temper hushed Nevertheless, this will not be the fact in the foreseeable future. The situation from the united leading was could possibly not circ.you.mvent the affiliation of every Beyond Quality A’s allied faction. This is the shackles made by the 3 Worldwide Civilizations, plus they would not enable the Beyond Grade As to take out it. sanders of the river Whilst the Beyond Grade As had identical unions, there got never been a union for these range, similar to the Mercenary a.s.sociation, Pharmacologist a.s.sociation, or even the Superheroes a.s.sociation. While identity with this enterprise was known as the Progression Union, on the eye of critical persons, this would be an a.s.sociation that gathered most of the Beyond Standard As with the regarded world. “Flickering Society Beyond Level A Creation Union?” “There’s now a challenge in our prior offer.” “What are you referring to? The Flickering Environment Beyond Quality A Advancement Union is really a non-governmental corporation that seeks to deal with the contradictions relating to the Beyond Class A corporations active in the increase of the Flickering Community. It’s always easier to get rid of problems through negotiation in lieu of assault and jointly take care of the public security and safety in the Flickering Community. This is a good issue that could benefit the total Celebrity Industry, which is also a notion I formulated following experiencing the former incident. I am nervous that there might be a lot more Beyond Class As designed to fall into my fingers.” Han Xiao spoke by having an concept like he totally thought within his sits. “Alright, ignore the performing component, why am I even the main identity?” Feidin was speechless. “And my purpose is you‽” “It’s just becoming a superstar. You’re back in your outdated lines. Furthermore, the army wants you. You can’t possibly reject this.” Han Xiao chuckled. Solution Become an expert in had no anger toward Han Xiao s.n.a.t.c.hing his lollipop. Naturally, as part of his view, Black color Star failed to be aware of their confidential financial transaction and failed to specifically targeted him. “I learned anything. You actually want me to be and become an actor.” Feidin got a reconciled expression. “Couldn’t you have mentioned it with me in advance?” “… Alright then. Let us carry on using the package when you’re completed.” “I identified almost everything. You truly want me to be and remain an actor.” Feidin had a reconciled expression. “Couldn’t you may have talked about it with me in advance?” “The dilemma is so big that I’m speechless. The dynasty will never acknowledge.” Sylvia had an manifestation of helplessness. “To boldly unite each of the Beyond Grade A businesses, is not this bringing the initiative to rage along with the three Standard Societies?” Rebirth Of The Female Antagonist: XUANRONG She have up and opened up the doorway, only to find that Feidin was patiently waiting outdoors. The 2 main of them pa.s.sed the other. “This man is full of mystery.” Han Xiao shook his top of your head. This has been all conjecture, and maybe Sorokin got one more reason to do so. “Sigh, it is indeed this way. I could actually create a new significant shareholder chair, but I have to decrease the collateral from the some others, which is a lot more frustrating. It will require lots of time to do some business treatments and complete the investment capital enhance. Normally, additional Beyond Level A shareholders will never be content.” Sorokin could not remove his reveal, if not it may be hurtful if he lost management of the whole financial party. Han Xiao persuaded Feidin a while longer just before mailing this doubtful person out of. Han Xiao located his legs on the kitchen table, bringing up his eyebrows while he responded, “Any difficulties?” It absolutely was simply the initial step for him and Sorokin to visit a popular opinion. He still were required to document it along with the three Common Civilizations regarding this topic regarding their help and support. Because the Evolutionary Totems, that was not just a difficult job. Ability to hear this, Han Xiao discontinued his scholarly act, moving his eyeballs. Han Xiao laughed to him or her self. well before waving his fretting hand. “Alright, I will not joke any further. The main reason is simply because not one person dared behave as me and bear the consequences right after. You are an individual in close proximity to be who once was a star, therefore the director established it like this.” Han Xiao suddenly enjoyed a eureka minute and hurriedly recited a number of words from your socialist mantras before deriving an innovative notion. “You suggest Black colored Celebrity stole my booked shareholder seating?” Solution Master’s phrase evolved. “I guaranteed to go out of you among the shareholder jobs, but Black Superstar came up over just now and made use of his existing impact to endanger me. Additionally you know, using the sale on the Evolutionary Totems, the 3 Universal Cultures will surely service him. Consequently, to lessen my loss, I really could only consent to his obtain.” He was distinct that after a very organization was developed, the Beyond Class As would not anymore are present as a holder of loosened yellow sand. The three Universal Societies would also come to be cautious, but at this moment, this corporation only supported to assist telecommunications, absolutely nothing really important. Therefore, whilst the three Common Societies would experience anxious, they would not have any measures. Equally Manison and Kasuyi, just one director along with the other the actor, failed to go to encourage him after the event. No matter what that they had accomplished well before, not less than he owed them a big favor during the Hila Rescue intention, so he still needed to put in some time and effort to take care of a united entrance. “Having such non-governmental institutions will never assemble the three Universal Civilizations confident. They should inevitably request for their own personnel to settle in. By that time…” Sylvia interjected. who is the leader of the uchiha He was very clear that after a very organization was developed, the Beyond Quality As would will no longer can be found being a tray of shed yellow sand. The 3 Universal Societies would also become careful, but at this time, this organization only served to support communication, practically nothing really significant. As a result, even though the three Worldwide Societies would really feel apprehensive, they would not take any activity. “Alright, forget about the acting element, why am I even the key personality?” Feidin was speechless. “And my position is you‽” It had been exactly the initial step for him and Sorokin to visit a agreement. He still were forced to submit it using the three Widespread Civilizations in regards to this make a difference regarding their support. On account of the Evolutionary Totems, this has been not really a complicated job. Underneath regular circ.you.mstances, this kind of significant dividends would stop taken free of charge. Every shareholder position offered by the Unlimited Monetary Party was popular, and they would type another enterprise cohesiveness with all the Beyond Standard A institutions, mutually benefitting both parties. Under typical circ.u.mstances, such enormous dividends would stop used free of charge. Each shareholder posture offered by the Countless Financial Group of people was desired, and they would variety another company assistance with all the Beyond Standard A companies, mutually benefitting both sides. the world great books volume 56 “But it is been such a long time since I final acted. Won’t this be too unusual in my opinion?” Feidin was still hesitating. This guy had not been like the other Beyond Standard As, who counted themselves personal capability to influence their organization’s progression. He acquired little or no record really worth mentioning, along with his company was work via the standard and appropriate way. Sylvia still got a concerned manifestation. “To have our Dark Superstar Army function as pioneer, won’t this be too eye-getting? Particularly your eyes on the dynasty…” “… Alright then. Let’s move forward along with the deal when you’re done.” Novel–The Legendary Mechanic–The Legendary Mechanic
OPCFW_CODE
There are 12 repositories under m1 topic. 🖥 Control your display's brightness & volume on your Mac as if it was a native Apple Display. Use Apple Keyboard keys or custom shortcuts. Shows the native macOS OSDs. Unlock your displays on your Mac! Smooth scaling, HiDPI unlock, XDR/HDR extra brightness upscale, DDC, brightness and dimming, dummy displays, PIP and lots more! 🦾 A list of reported app support for Apple Silicon as well as Apple M2 and M1 Ultra Macs A lightweight JIT compiler based on MIR (Medium Internal Representation) and C11 JIT compiler and interpreter based on MIR An introduction to ARM64 assembly on Apple Silicon Macs Awesome - JingOS - The World’s First Linux-based OS design for Tablets Ad Blocker App for iOS, macOS A set of utilities (vmcli + vmctl) for macOS Virtualization.framework 📦 A familiar Minecraft Launcher with native support for macOS arm64 (M1) Apple Silicon Guide. Learn all about the M1, M1 Pro, M1 Max, M1 Ultra, and M2 chips. Personal App that turned into "alpha released app" v2 appdecrypt is a tool to make decrypt application encrypted binaries on macOS when SIP-enabled Mac OS Status Bar App that puts at eyesight your AirPods battery levels. Universal Intel / M1 Compatible Install ioquake3 on macos in one command (M1 native support) Create virtual machines and run Linux-based operating systems in Go using Apple Virtualization.framework. TensorFlow Metal Backend on Apple Silicon Experiments (just for fun) Memory modification tool for re-signed ipa supports iOS apps running on iPhone and Apple Silicon Mac without jailbreaking. macOS virtualization app for M1/Apple Silicon List of the working shaders on Apple Silicon Macs. React Native 0.70.0rc ⚡ M1/M2, Ubuntu 💻 Hermes ⚙️ Fabric 🚄 Turbo Modules 💨 TypeScript 4.8b ✔️ Gradle 7.5, JDK 18, NDK 25, ndk-build (CMake 🚧) 📓 Storybook 6b +addons 🍎 Xcode 14b, Monterey 13b 🧩 Yarn 3.2 (Turborepo 🚧) ESLint ✔️ Prettier ✨ Metro 📦 Re.Pack 🚧 Bazel, Buck2 🚧 Babel 🗼 SWC 🚧 cljs 🚧 for curious early adopters :suspect: IDA loader for Apple's 64 bits iBoot, SecureROM and AVPBooter Dockerize your PHP apps ;) Install homebrew in native mode on Apple MacOS ARM (M1) The portable version of JetBrains profiler self API for .NET Framework / .NET Core / .NET / .NET Standard Control your display's brightness from the macOS menu bar. Simple and easy to use. Computer setup and settings. Apple Silicon ready. Boilerplate for GPU-Accelerated TensorFlow and PyTorch code on M1 Macbook A tap with patched Python 2 formula for Apple Silicon (M1/M2) Macs. The ultimate list of iOS device models - Identify model for iPhone, iPad, iPod touch, Apple Watch, Apple TV, and Mac computers with Apple Silicon. Vosk ASR Docker images with GPU for Jetson boards, PCs, M1 laptops and GPC A miner optimized for Apple Silicon M series processors. FinderFix lets you resize and reposition Finder windows to your liking.
OPCFW_CODE
The Single Best Strategy To Use For programming project help We've been entirely dedicated to your requirements, your programming help will probably be completed by certified industry experts at your activity level high school by means of Masters diploma stages, and they are even accomplished As outlined by your unique needs. Programming homework should superior be left for the authorities, in which you know you can get the most effective programming aid from an expert in your field. We sieve by means of all our done assignments thrice making sure that plagiarism of any variety won't escape us. Turnitin will be the special Instrument with which we do all our plagiarism Look at. R is definitely an open resource programming ecosystem and language which was notably founded for generating statistical applications for computing and Visible analysis. There are a variety of statistical methods and strategies for Investigation which include Time Series Examination, hypothesis screening, warehousing and mining of data, clustering, and so forth in R programming ecosystem which The scholars have to discover for efficient utilization of R. I couldn't discover a way to finish my web design project utilizing ASP.Web. Soon after a number of times of battle, I gave up and started searching for programming help online. Audio chat courses or VoIP software package can be helpful once the screen sharing application doesn't give two-way audio capacity. Utilization of headsets retain the programmers' arms free Acquire reliable and trustworthy programming help from TFTH, the top assignment crafting service globally. We offer our help starting up at only $ 10 for each website page for an assignment. Assignment4u strive to offer the programming important link assignment help to pupils who battle to accomplish projects connected to assembly programming. The students could have the whole guidance of assignment4u and will seriously rely on this platform. JAVA programming language is the preferred commonly utilised programming language resulting from its major Advantages. In this article our JAVA programming assignment help industry experts list the advantages of JAVA programming language: As Now we have stated before our programming assignment writers are really educated gurus who know how to cope with different difficulties that your professors throw your way. The review Main technical components and proper Evaluation of diverse provided Related Site procedures and tools are complicated for college students and therefore have to have qualified r programming assignment help. The guidance crew should be able to manual The scholars to make sure that the most beneficial experience can more be generated when completion of assignments are in issue. Almost any complaints or requests that are made is going to be specified the utmost of all priorities making sure that pupils tend not to truly feel omitted. We assure you the options we deliver are accurate and very well-published. It will have a mark on your professor and make you your desired grade. The only method to measure the scale of the method should be to rely the lines. Here is the oldest and most generally utilized size metric. We know that many of the moments college students assignments have specific deadlines for submission. For that reason, we usually be sure that we're up to the process in providing our services to any college student by the due date, each time!
OPCFW_CODE
This is The very first time I've taken assignment help online And that i don't regret some it. It absolutely was a beautiful knowledge with MyAssignmenthelp.com. They ended up fantastic. They supply the paper ideal in time. And the caliber of the written content was ok to have me an A. Truly joyful! Thanks Fellas! Although, there are several explanations for selecting our online assignment help support, but still there could be specific doubts relating to the standard of the operate we produce. Experience many of the samples of different assignments like essay, dissertation, circumstance scientific tests and so forth. created by our pro writers and know how we proceed While using the paper. Now we have a sizable clientele that may be distribute all around the world. Subsequently, our dollars inflow is sufficient for our group customers to become compensated adequately despite our acceptable selling prices. Young brides and grooms who would like to prepare for that Exclusive working day thoroughly fail some assignments (as well as exams), remaining beneath the constant pressure. If some academics don’t see it as a legitimate purpose (when compared to the circumstances described previously), we do! We are aware that students normally have to view their time although finishing their tutorial assignments for the reason that most of high school, university and College mentors refuse to simply accept academic assignments delivered outside of the deadline. Big selection of expert services. We propose composing perform of any subject matter, executed by superior capable specialists. We offer perform of various levels – for highschool, faculty or university students relating to to diverse topics. Issues With all the Task Management: “I simply cannot deal with to complete my homework assignments in time, so I normally want to rush up crafting papers. That features a negative impact on the standard of my paper writing” Possessing too much items to perform, you can certainly forget about many of the points. My author at Doanassignment pop over to this site is click reference my very first assist. I requested her to carry out my investigate paper and helped her with a few facts which i had. 29-Nov-2018 Maryam, UAE Excellent occupation inside the organisational behaviour scenario study Seriously I'd suggest your providers to my buddies. I'm really thankful to your workforce for completing such an excellent organisational conduct circumstance study producing for me. I registered my assignment request on this website and acquired the solved paper two days ahead of the deadline which is simply incredible. Received A+ within the paper. Thanks MyAssignmenthelp.com for supplying these types of an exact solution. . It of course considerations college students that have small little ones. Parenting requires Nearly all your time and effort, so a little assignment help will not likely hurt any individual. This team advice also consist of individuals who will need to deal with elderly family members. In addition to, we've been happy to declare that We've helped a great deal of those people who are going to get married. College students normally confront issues with homework, and whenever they get bored by trying to find answers to all of the concerns, they begin in search of help. Are you interested in to uncover someone who could have a great comprehension of your complications and who'll create the do the job making sure that it displays all of your current paper’s sides? The many do the job need to be Utilized in accordance with click to find out more the suitable guidelines and applicable regulations. We have been applying Google Analytics to improve your expertise. No own facts is currently being tracked. We take Adhering to the whole process of accomplishing the study, all of the collected facts is set collectively. This is certainly finished by pursuing the paper composition that is needed for that web link assignment. Our gurus prioritize the paper framing as that is certainly what will make an assignment presentable and also readable.
OPCFW_CODE
Action command Update now support SaveAndContinue argument in Touch UI with the release 184.108.40.206. This special argument value changes the behavior of the form after successful completion of Update command on the server. The form will not return to the previous view and remain in edit mode. It will retrieve the server values and display them to the user. Any visible DataView fields will be refreshed. User can continue editing the record. Any server-side changes to the master and detail rows are visible to the user. We will incorporate Update/SaveAndContinue action in the "Form" scope in the new projects by default . The new button will be displayed between Save and Cancel buttons in edit mode. Define this action explicitly to have it in your apps today. The forms will also start displaying Next and Previous navigation buttons that will perform Update/SaveAndContinue as needed. You can try the hidden "Next" form navigation feature in "read" mode today by pressing Right Arrow Release 220.127.116.11 introduces the following features and bug fixes: - (Touch UI) Action with command Update and argument SaveAndContinue will not close the form after successful execution. The form will stay in the current state and refresh the current row with the server data. The form will also sync the child data views. - (App Gen) New command line option -DataModel changes behavior of -generate and -refresh commands. App generator will not produce the source code when the option is specified. It also resets the data controllers. If the data models are changed manually or via automated scripts then the next execute of -generate command will incorporate the model changes into the app. - (Framework) New method ActionArgs.AddValues allows adding values to the Values array. - (Framework) Method SqlStatement.Configure accepts DbCommand parameter. It allows configuration of command properties such as CommandTimeout to be applied to the entire application framework. Create a partial class SqlStatement in Data namespace to override the method. - (Framework) Server-side API _invoke allows specifying additional path/query information. - (Framework) OAuth access_token and refresh_token must be non-blank to be written to OAuth configuration. Refreshing of tokens will not cause loss of token. - (Framework) Site Content now supports ModifiedDate and CreatedDate to allow date-driven manipulation of content in the upcoming Content Add-On. - (App Gen) Prevented interruption of Project Designer operation when exception "Unable to initialize native support external to the web worker process..." raised while trying to access HTTP cache. The error seems to be have its source in ASP.NET 4.8 HTTP Activation feature. - (Touch UI) Apps based on Touch UI do not specify data-show-modal-pages attribute on "div" elements representing data views. This setting applies only if Classic UI is also supported. - (App Gen) File web.config is correctly processed with "regex" expressions when created for the first time.
OPCFW_CODE
My c++ Crashes When Calling My Enqueue Object Function I have a program that performs discrete time event simulation. I have modeled this by using a linked list that I use to enqueue the next events. The main logic of the code is inside a while loop that runs for some n amount of things completed. When this number is low enough, it will finish the loop. However, with enough iterations, it will eventually stop working. I placed various debug statements to see where the issue was, and it was within my enqueueEvents() function, a function that inserts an object into my linked list sorted by time, head being the soonest, and the tail being the latest. The program outputs ERROR POINT J, but not ERROR POINT K, leading me to suspect that something is wrong with the if((head == NULL) || (head->event_arrival_time > tempNode->event_arrival_time)). But I cannot think of what in C++ would make the program simply stop. I'm not looking for someone to simply fix my issue, but would love some insight on why this might be happening. CODE: void EventHandler::enqueueEvent(Process p_input, int type, float event_arrival_time) { if (debug_mode) {cout << "ERROR POINT I" << endl;} Node* tempNode = new Node(p_input, type, event_arrival_time); Node* current = head; //Iterable Node if (debug_mode) {cout << "ERROR POINT J" << endl;} if((head == NULL) || (head->event_arrival_time > tempNode->event_arrival_time)) { if (debug_mode) {cout << "ERROR POINT K" << endl;} tempNode->next = head; head = tempNode; tail = tempNode; return; } while (current->next != NULL && current->next->event_arrival_time <= tempNode>event_arrival_time) { current = current->next; if (debug_mode) {cout << "ERROR POINT L" << endl;} } tempNode->next = current->next; current->next = tempNode; } I originally had not set the first node = to tail in this portion of the function: if((head == NULL) || (head->event_arrival_time > tempNode->event_arrival_time)) { if (debug_mode) {cout << "ERROR POINT K" << endl;} tempNode->next = head;head = tempNode; tail = tempNode; return; } I have placed various debug points to see where the program stops. Please provide a [mre] to get a comprehensive answer. If you really wants to consult crystalballs, then please include runtime call stack error or no one could help you. IMHO, using debugger should be suffice in this case. Second, why do you need a handmade Queue while you may leverage std::queue? Thank you for the reply. I brand new to Stack Overflow, and a novice progammer, so I am unfamiliar with the standards of debugging complex code. As for not using a queue library, I am used to implementing the queues myself, as per standard in school assignments. I am gonna isolate the class in a separate file and debug there. Thank you for your valuable input. You may simplified your project like this example in godbolt.org. It helps me tons of times for isloating the problems. Still, I suggest to learn how to use debugger. Keywords: "How to debug with ". Also get your implemented queue unit tested is another good practice. You suggest that with enough iterations, your enqueueEvent function crashes. However, that does not guarantee this function is the problem. If you happen to also be dequeuing stuff, that might be the cause. As part of your investigation, you should consider this as a possibility and test for it. It could even be your class constructor is not initializing stuff. Who knows? We can only guess here, with a limited view of your program. One clear error in your code is the expression: current->next->event_arrival_time <= tempNode>event_arrival_time -- I'm certain you intended tempNode->event_arrival_time in there. If your queue assumes a particular ordering to function correctly, then this error potentially breaks your ordering and might lead to issues. Of further note: instead of duplicating your ordering logic (which as I showed above is broken in one place), it might be more robust to simply your empty-list case and have the ordering logic in only one place. Then, finish the function with: if (current == head) head = tempNode; -- and you almost certainly need to update the tail anyway: if (tempNode->next == NULL) tail = tempNode; I think my '-' was missing in my pasted code, because I have that in mine. That simplified logic makes sense, thanks! I am learning to use the debugger and seeing the spot my program stops working is due to a segmentation fault at: if((head == NULL) || (head->event_arrival_time > tempNode->event_arrival_time)) @CarsonHolland head is an invalid but non-null pointer. This is caused by something in the rest of your code. @CarsonHolland As for not using a queue library, I am used to implementing the queues myself, as per standard in school assignments. -- To be honest, these types of restrictions make no sense if the overall goal is far greater than implementing a linked list or queue class. If your goal is to implement time event simulations, that is enough work as it is without having to fight with home-made buggy linked lists/queue classes. The school / teacher should be honest and admit that this homework is a "write a queue class" in disguise. Thank you so very much for your valuable input. I think I'm simply gonna use the c++ queue library to ensure I'm using functions that have been well tested. You all have brought valuable insight into my issues. Thank you!
STACK_EXCHANGE
The setup is as follows Campus Manager = 4.0.8 Cisco View = 6.1.5 ACS Appliance = 4.0.1 (44) In ACS there is a User Group defined as "Group A", which has privileges on "per NDG" basis. NDG1 contains "Ciscoworks Servers, Master\Slave" NDG2 contains "specific set of devices". Privileges are as below 1. NDG1 --> View, View Devices 2. NDG2 --> View, View Devices 1. NDG2 --> Read-Only 1. NDG 1, NDG2 --> Launch Topology Services, UT View, Port Attributes, VLAN Report Cisco View works exactly as expected and the User is only able to View (Even List) only the devices contained in NDG 2. However in Campus Manager, the User is able to A. Launch the topology services window (as exepected) B. But he can View ALL the devices from the DCR and can view Topology maps etc ? Why is he Not being limited to viewing devices\topology maps etc for ONLY the devices in NDG2 (as was the case with Cisco View) ? This sounds like a bug. The user should be limited to only the matched devices based on their NDG membership. You should open a TAC service request so more analysis can be done. Before that, though, double-check that you have the correct radio button checked to enforce NDG usage for the Campus Manager application (i.e. you're not accidentally allowing access to all devices). You might also try restarting ACS and dmgtd to see if that causes the two to properly synchronize. Radio button is selected correctly "Assign a Ciscoworks Campus Manager on a per Network Device Group Basis". Re-starting the services didn't help, so i will go ahead and open a TAC Case. Apparently this is design limitation as per Cisco TAC +++++ From TAC ++++++ I have tested the Campus Manager and confirmed with the developer on this. It is the right behavior that the ACS can only control the Device Selector screen when we are in NDG. Since the Topology Services section is part of the java GUI, it is currently has no way to be controlled by Let me know if you have any further question. i had a similiar issue months ago. I wanted to setup different NDGs for different Usersgroups (SR 603890723). Everything worked fine except the Campus Manager Topology View. I guess the main problem is the App itself. Read the answer from the developers: This is what DE's said regarding the restriction of access to devices in topology view: It is not possible to restrict the topology view on a per user basis because the purpose of topology view itself is for viewing the entire set of devices that we are managing.Hence only the tasks that can be performed with the devices could be restricted and not the display of the device. The reason why the display of the devices itself cannot be restricted is that if each user has the permission to view only a set of devices it is difficult to draw the map if in case these devices are not in sequence.It could be shown only as disconnected devices for that user. I concluded that e.g. the Topology View is not able to show only a subset of devices. It would be nice if this turns into a bug and get solved some day. P.S. Your work for this forum is highly appreciated and gave me often the right hint! This is a bug as it violates the security implied by ACS integration and NDGs. Yes, drawing a topology map from an incomplete set of network devices MAY be messy, but we already offer that capability using OGS groups which may be arbitrary groupings of devices. Additionally, some networks may be so logically organized that all devices within an NDG are properly connected, and thus Campus can operate just fine. As for limiting tasks based on NDG assignment, this is also broken. While most tasks are prohibited on unauthorized devices, the Device Attributes task works on all devices. This can reveal too much about a device to a particular user, and violates the principles of least privilege and privilege segregation. many thanks for this clear wording! But as you could read the TAC - or better the DEs - had a different opinion to this issue. Thats the reason why some of my costumers are screw up about the LMS. They argued that several other tools are able to fulfill this task with a RADIUS and Cisco is not able to manage this with a CISCO NMS and Cisco ACS! Is there a BUG ID availiable? Does it help to raise a TAC SR again? I have filed CSCsk11553 to track this issue. I feel it is important enough to fix due to its security implications.
OPCFW_CODE
What a tumultuous week that was, but Brickset is back and better than ever! It all started on Tuesday last week when the site was experiencing network errors connecting to the database. The solution, I was told, was to move both servers to a new network. That started on Thursday but for one reason or another took a couple of days to be completed. Then, when it was moved, the server was running perfectly but nobody could connect to it reliably. I suspected a denial of service attack on the site, having found some very dodgy-looking usage in the server logs but the cause is now believed to be a DNS amplification attack on another server on the same network which was causing it to be flooded with bad traffic making the sites hosted there unreachable. The team at OnRamp Indiana did a great job at trying to control it and keeping me informed of progress but with the problem continuing and no end in sight yesterday morning I felt I had to do something. I started looking round for alternative hosting providers following similar, but shorter lived problems, last summer so I already had one lined up who understood my requirements and had provided a quote. So it was just a case of pressing the button, as it were. The order was placed at 8am yesterday, by midday the server had been built, by 2pm I'd configured it and by 4pm the database had been copied. After a quick change to the DNS settings at CloudFlare, Brickset was back on air. I wondered whether to re-host the old code, or just bite the bullet and launch the new site at the same time. Given that the new site had been used and tested by most of you over the last month and most of the problems had been ironed out, I thought I might as well launch it. The site is now on a dedicated server and seems to be running incredibly quickly. Complex database queries that used to take 1-2 seconds on the old server now run in about a quarter of that. The CloudFlare content delivery network will also be helping to speed up the delivery of static content to you, from servers located around the world. After breathing a sigh of relief at tea-time yesterday I thought I'd take a break for the rest of the day and tackle the outstanding problems today. I hope you'll excuse me for doing so. The known major issues are: - Country detection: Because all traffic comes through CloudFlare the method I'm currently using to detect your country isn't working. I believe CloudFlare provide another means to determine it so I will be investigating that today. In the meantime you can click on the flag at the top-right to change it. If the flag looks funny press crtl-F5. - Time zone: The server is now in the UK so the time-zone code needs to be changed, everything is 5 hours ahead of reality at the moment. - It seems emails sent when resetting passwords are not being received. The server is sending me email OK so I'll delve into the code and figure out the problem later. - The scheduled tasks to pull in data from Amazon etc. and crunch the database overnight are not running yet. Finally, for those of you that think the site is too bright: you'll find a setting in your profile where you can elect to have 'Brickset Blue' back again. Welcome back,everyone. Normal service has been resumed...
OPCFW_CODE
[Date Prev][Date Next][Thread Prev][Thread Next] - Subject: R: Re: How to export associative arrays to an embedded lua script? - From: "linuxfan@..." <linuxfan@...> - Date: Mon, 12 Oct 2015 14:08:23 +0200 (CEST) Hi list (or table?)! I'm still evaluating how to expose my pascal tables to lua scripts. So far, I understand that I have to use metatables linked to a table or to userdata. On the argument the manual is a bit ambiguous; it says "Tables and full userdata have individual metatables [other types share a metatable per type]". If all light userdata share the same metatable, I could have problems in the future. Moreover, I am not sure that lua scripts can "subscript" a light userdata the same way as a table. Supposing that I create a few global variables named "env", "gui"..., all of type light userdata, and I link a metatable on them, can I write in lua env.language = "en" touch = "up" then ... end and so on? If yes, this could be the way, but what if I will need more classes and/or different tables? Full metadata could do, if they behave like light userdata (if light userdata do what I want). I could store my pointers in the memory allocated by lua - no problem. So, what I want to know is the question above, plus some warning about difficulties I am not aware of. Remember, main storage must be done in host application, not scripts. Some tables will be read-only, some other will have special processing (rising events when accessed), and so on. But I must add that I am not afraid about scripts doing wrong things. Scripts will have to comply with a series of rules (for example, NOT change the metatable associated with host values). I would like to add some thoughts on your comments. Dirk said that Lua doesn't really know the name of a table it pushes on the stack. But if I create a table and assign it a name via lua_setglobal(), that table has a name, and lua knows it (because lua translates the source code in byte code through the name of that table). I think that if a table named "env" exists, will point to the same table, so it will have the same internal "pointer", and the same metatable; the global table will still contain a link from "env" to the pointer value, and one more link "env2" with the same value. I could even scan the whole global table and get the name from the pointer (not efficient, ok). This is where the lua manual is a little ambiguous; it says that "every value can have a metatable", but what about two identical values? Anyway, I made a quick try storing the lua value for my just created tables, and they work as expected. Dirk said that this is not a good idea, because lua doesn't assure the pointer stays valid, perhaps after a garbage collection. I don't see why lua should move a table to a different address, while there are global variables referencing that table, but who knows. In this case, all the ideas about storing the lua value somewhere (the registry included) fail. A __index host function should only rely on the table value it has got on the stack, but it could look up a key - inside- that table. The only problem is that a lua script could modify that value, but my lua scripts *should not* do it. This idea was from Philipp, and I want to say that is a quick and easy one; I am not concerned about collisions: table keys are "well known" keys, and it would be an error to access other keys. After talking about light userdata (see my question before), Dirk talks about freepascal. Yes, my project uses Delphi *and* freepascal. Actually, I took the freepascal bindings and they work nicely under delphi. Rena offered 4 ways. Store the table name in the metatable). Nice and straightforward, I have to think about it. (2 Map a value->tablename in the registry). Doesn't work if it is true that reference to a table can become (3 Keep the name in an upvalue). I've still to understand what an upvalue is. If upvalues are contexts for closures, then it is another nice solution to think about. (4 Use userdata instead). I must investigate userdata, I suspect that they are not perfect for what I want. It was subtle the mention to C switch statement, as Boris also did. I will have no more than 8-9 tables, so any switch statement is faster than a lookup in a lua table. After having received a reply to my question at the beginning (how compares a metatable attached to a userdata instead of attached to a table), I will go ahead and will let Salutations again, sorry for the long message,
OPCFW_CODE
Yea sorry, it’s a tricky question in fact and in another way, it’s easy to make a joke on this kind of speech. Moreover, I do not have 15+ years of experience, so they know a lot more than me obviously. However it ticks me because at my last job, the next guideline was to use Go for new back-end developments, even if only few back-end developers had experience with the language. I didn’t see it as a game changer for us in despite of the internal speech. Firstly, Go seems to be a good language. I don’t like it for the few I read and tried, but who cares? It can produced statically type checked executable with a build-in garbage collector (e.g. some techs see it as a good fit for AWS Lambda). It’s seems to be in my sense in-the-middle between Rust and Elixir: not safe as Rust but easier to master with similar efficiency (let’s say less than 2 times slower), and without the resilience (and the accidental pseudo infinite scalability) of Elixir but way faster per process and built-in concurrency patterns. But if you are looking for a the “chosen one language” because messing with many tools is a pain, it’s a choice to consider. Secondly, I understand the “cult” you can have on a tool or tool set. I feel the same with Elixir, even if I do not consider myself advanced on this topic. I’m remembering right now moments where people look at me before I talk saying as a joke: “Elixir of course?”. In fact, I try to not answer this myself because I feel a lot of “I really want to use Elixir so I suppose I can’t be impartial to choose the right tool for the right job”. It’s sometimes hard but we have to, as “software engineer”, to do our best to be pragmatic. A tech lead have to consider multiple parameters: the domain, the tech communities, the tool sets associated with languages, the internal team knowledge… Development speed and production costs can be estimated from this points. But we can add to the list your team wishes/believes is to take into account because it’s not false to say it’s a way to innovate and have fun at work (but not the only way of course). To put on the top the last one as main argument is a mistake. A recommendation could be to come back to challenge them about their speech, e.g. what is your current and next hosting strategy, which languages did you consider before making this choice, and so why Go, what kind of problem does it solved for you, why a no framework strategy (I really do not understand the last one BTW)
OPCFW_CODE
import pytest import os from pillowtalk import * import copy @pytest.fixture def APIs(): class API1(object): def __init__(self, login, password, home): vars(self).update(locals()) class API2(object): def __init__(self, login, password, home): vars(self).update(locals()) class MySession(SessionManager): pass return API1, API2, MySession def test_create_sessions(APIs): API1, API2, Session = APIs cred1 = {"login": "John", "password": "Thomason", "home": "www.johnnyt.com"} cred2 = {"login": "John", "password": "Flompson", "home": "www.johnnyf.com"} Session().register_connector(API1(**cred1), session_name="API1") Session().register_connector(API2(**cred2), session_name="API2") api1 = Session().API1 assert Session().session is not None api2 = Session().API2 assert Session().session is not None assert api1.__dict__ != api2.__dict__ assert api1.password == cred1["password"] assert api2.password == cred2["password"] def test_raise_attribute_error(): s = SessionManager() with pytest.raises(AttributeError): s.name def test_create_sessions(APIs): s = SessionManager() s.name = "before" this_dir = os.path.dirname(os.path.abspath(__file__)) test_dir = os.path.dirname(this_dir) file = os.path.join(test_dir, 'pickle_tests', 'pickle1.pkl') s.save(file) s.name = "after" assert SessionManager().name == "after" s.load(file) assert SessionManager().name == "before"
STACK_EDU
Do you want to choose a good Roblox username? The usernames for Roblox need to be unique because they do not allow you to choose the same username as another user. It can be a hard task to choose a cute Roblox username especially when there are millions of users are already joined. That is why it is difficult to find a username that is not taken. If the username that you selected is already chosen/taken, you can do two things to get over it. You can either add underscore or numbers to it. Even so, you may fail if you have taken the common numbers. This post will show you good, aesthetic, and cute Roblox usernames ideas for boys and girls that are not taken. The username ideas for Roblox can be found in this post and you can use them for inspiration. How do you make a good username on Roblox? You need to keep your username short and simple that can be considered as good username on Roblox. However, most simple and short usernames on Roblox are already taken. But there are still low digit usernames that are available. On the other hand, you can also use the Roblox username generator to generate usernames that are available. Use this URL to generate Roblox usernames: https://robloxden.com/username-generator/ Can Roblox users take past usernames? No, the past usernames cannot be used again. It is because you need unique Roblox usernames for this purpose. In a nutshell, you cannot use the same username you used before or any other user. If you will try to use the same username that is already taken, you will get this error message “This username is already in use”. You can bypass this hurdle by using numbers or an underscore in your username. However, I do not recommend using numbers or underscores as much as possible because they can complicate the username. Cute Roblox usernames Good Roblox usernames Aesthetic Roblox usernames It is difficult to choose a Roblox username because a lot of usernames are already taken. Hence, the post has the best Roblox username ideas that are available. If you do not want to choose the usernames that are not taken in this article, you can use them for inspiration to create your own username. If the username is not available or already taken, you can do two things to avoid this. First of all, try using numbers or underscores in the username. The second option is to add “its”, “im” or “the” at the front of your username ideas. Arsalan Rauf is an entrepreneur, freelancer, creative writer, and also a fountainhead of Green Hat Expert. Additionally, he is also an eminent researcher of Blogging, SEO, Internet Marketing, Social Media, premium accounts, codes, links, tips and tricks, etc.
OPCFW_CODE
|January 16th 2002 We are pleased to announce the availability of Open Commerce Services (OCS) from Advanced Network Systems Inc. (ANSI), specialists in designing, re-engineering, and evolving proprietary HP e3000 systems and applications into platform-neutral enterprise-scalable systems which integrate the old with the new to protect and leverage investment. Web based systems make greater demands on the e-business infrastructure, an infrastructure that can make or break a business, and which must therefore be Open, Scalable, Interoperable, and Secure. The new ANSI OCS framework is aimed specifically at Companies who now find they are reliant on limited life "proprietary systems" like the HP e3000; Companies who need to ensure that they achieve and maintain competitive edge through the new initiative of Internet linked infrastructure to collaborate with Business Partners and manage the Supply Chain. ANSI OCS is based on open standards that integrate with existing systems and can respond and evolve to rapid change. Our message is "Continue to develop with confidence on the HP e3000 knowing that the solutions are operating system, DBMS, and Hardware neutral". Regardless of choice of hardware, operating system, or DBMS, including MPE/iX or IMAGE , solutions written using the ANSI OCS frame-work will interoperate with other solutions that follow current industry open systems standards with little, if any reprogramming. For VPLUS applications, ANSI OCS provides ANSI Studio, an extensible platform-neutral GUI-based IDE to enable VPLUS to be evolved into J2EE compliant components; Servlets, JSP's and Java Applets, that can be deployed automatically to any J2EE compliant application server (ANSI-Web, WebLogic, WebSphere, BEA, Bluestone). For MPE specific clients, ANSI OCS provides the MPE/iX Enterprise Client API. The complete, real-time solution, for client access to Oracle, DB2, Sybase, SMTP servers, and Enterprise Applications (SAP/R3). The MPE/iX Enterprise Client supports embedded SQL, thus providing an alternative for the un-supported Oracle Gateway for the HP e3000. MPE/iX Enterprise Client includes facilities to create Oracle, SQL Server, or DB2 tables from IMAGE datasets. Data replication from the IMAGE data sets into the newly created tables is also supported. MPE specific client applications can be scaled to other platforms with MicroFocus COBOL or Oracles Pro*Cobol to recompile embedded SQL Cobol programs. For developing or evolving platform-neutral client applications, ANSI OCS provides ADBC, the Java-based client-side API's that provides direct, non-JDBC, access to IMAGE and to HP-ELOQUENCE, the multi-platform replacement for IMAGE. Also supported is platform-neutral client access to the MPE file system, MPE Intrinsics, Spooling, and KSAM. Integrating with ADBC is ANSI Web, the platform-neutral middle-tier J2EE standard application server that provides enterprise scalable database and connection pooling services. For developing or evolving platform-neutral access to heterogeneous Enterprise Information Systems (EIS), ANSI OCS provides J2EE Enterprise Resource Adapters, (ERA). Implementations of the platform-neutral J2EE Connector Architecture, ERA's simplify integration complexities by defining a standard architecture and uniform interface that enable EIS's to plug-and-play with J2EE compatible application servers. Examples include ERP, mainframe transaction processing (TP) and database systems (non-JDBC and JDBC compliant). Client access is achieved via the Common Client Interface (CCI), also an open standard, and part of the JDBC 3.0 specification. So changing the data source becomes a simple matter of changing the ERA. ANSI has ERA's availble for IMAGE and SAPdB, our preferred DBMS; other ERA's are available from DBMS suppliers or third parties. We are excited about ANSI's evolutionary approach to MPE legacy systems because it represents a viable and cost effective solution to the majority of the issues that have arisen from the termination of the HP e3000. Perhaps most importantly these solutions are not short term fixes but opportunities to evolve to open standards in a controlled and cost effective manner.
OPCFW_CODE
package gta_text.items; import gta_text.npcs.GameCharacter; import gta_text.*; import java.io.IOException; public class MediKit extends Item { public static final String NAME = "MediKit"; public MediKit() throws IOException { super(Item.FIRST_AID_KIT); this.other_names.add("Kit"); this.other_names.add("MedKit"); } public boolean use(GameCharacter character) throws IOException { if (character.getHealth() < character.getMaxHealth()) { character.increaseHealth(50, null); character.sendMsgToPlayer(ServerConn.ACTION_COL, "You use the " + this.name + ". You now have " + character.getHealth() + " health."); Server.game.informOthersOfMiscAction(character, null, character.name + " uses the " + this.name + "."); character.removeItem(this); } else { character.sendMsgToPlayer(ServerConn.ERROR_COL, "You don't need to use it. You are full of health."); } return true; } }
STACK_EDU
require_relative '../../lib/shadefinale_minesweeper/board.rb' describe Board do let(:board){Board.new} describe '#initialize' do it 'should have a 10x10 board' do expect(board.board_size).to eq(100) end it 'should have 9 mines' do expect(board.mine_count).to eq(9) end end describe "#clear" do it 'should clear board' do board.clear expect(board.mine_count).to eq(0) end end describe "#play" do it 'should remove flag from played square' do board.clear board.flag(2,2) board.play(2,2) expect(board.remaining_flags).to eq(9) end it 'should remove flag if square becomes revealed' do board.clear board.flag(2,2) expect(board.remaining_flags).to eq(8) board.play(5,5) expect(board.remaining_flags).to eq(9) end end describe '#get_neighbors' do it 'should have 3 neighbors for the origin (corner)' do expect(board.get_neighbors(9,9).length).to eq(3) end it 'should have 5 neighbors for the edge' do expect(board.get_neighbors(9,8).length).to eq(5) end it 'should have 8 neighbors for non-edge non-corner square' do expect(board.get_neighbors(8,8).length).to eq(8) end end describe '#flag_square' do it 'should properly set selected square to be flagged' do board.flag(2,3) expect(board.flagged?(2,3)).to be true end it 'should lower flag count when flagging 1 square' do board.flag(3,4) expect(board.remaining_flags).to eq(8) end it 'should lower flag count further when flagging multiple squares' do board.flag(8,8) board.flag(7,7) expect(board.remaining_flags).to eq(7) end it 'should toggle flag back off if trying to place flag more than once' do board.flag(9,9) board.flag(9,9) expect(board.remaining_flags).to eq(9) end it 'should raise error if out of flags to place' do board.flag(3,4) board.flag(4,5) board.flag(5,6) board.flag(6,7) board.flag(4,4) board.flag(3,6) board.flag(2,4) board.flag(3,7) board.flag(1,4) expect{board.flag(0,0)}.to raise_error("Out of flags!") end end context 'show board' do specify 'draw the board' do board.render end end end
STACK_EDU
SQL Selecting less significant entity from table I have a problem with some query from given result set i need to select the less detail row from table under some conditions. I have three selects that after union return this table SELECT A_ID, B_ID, 1 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 1 UNION SELECT A_ID, B_ID, 2 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 2 UNION SELECT A_ID, B_ID, 3 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 3 The result can be something like this 1000 100 1 1000 200 2 1000 300 3 From this table the final result should be 1000 100 1 The best case scenario is when a value is found then is no longer searched in next select. Some ideas ? EDIT: The solution presented by 'Jeffrey Kemp' one query works fine. 1000 100 1 1000 200 2 1000 300 3 1001 200 2 1001 300 3 result 1000 100 1 1001 200 2 Database: Oracle Database 10g Release <IP_ADDRESS>.0 - 64bit Production Without knowing the details of your query, this is one option to consider: SELECT * FROM ( SELECT * FROM ( SELECT A_ID, B_ID, 1 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 1 UNION SELECT A_ID, B_ID, 2 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 2 UNION SELECT A_ID, B_ID, 3 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 3 ) ORDER BY 3 ) WHERE ROWNUM = 1; An alternative is to add conditions to the queries to determine if they need to run at all: SELECT A_ID, B_ID, 1 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 1 UNION SELECT A_ID, B_ID, 2 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 2 WHERE NOT EXISTS (SPECIFIC CONDITION FOR LEVEL 1) UNION SELECT A_ID, B_ID, 3 FROM MY_TABLE JOIN MY_TABLE2 ON SPECIFIC CONDITION FOR LEVEL 3 WHERE NOT EXISTS (SPECIFIC CONDITION FOR LEVEL 1) AND NOT EXISTS (SPECIFIC CONDITION FOR LEVEL 2) Of course, I don't know the nature of your "specific conditions" so I don't know if this will work for you or not. Your answer is correct for my question, but would not work for my over all case where i need to have in result not single entity but a set. But still is same approach that I used. BTW note that the UNION s could be changed to UNION ALL here, since the results are guaranteed unique anyway. This might even improve performance somewhat.
STACK_EXCHANGE
I recently found my 2016 Lulzbot Mini 3D printer wasn’t printing the sides of models accurately: when I tried to print a box and lid pair, the lid was too small to fit the box. This error worried me, because the last time that problem happened it was the fault of stress fractures in the Y carriage supports, which took a lot of time and money to repair. After having a close look at the printer, I saw only one possible problem: a barely-noticeable hairline crack starting in the X axis Idler mount. Don’t worry if you can’t see the crack. It’s very small and in the early stages, so it’s not likely the cause of the problem. Looking for other possible causes of the problem, I remembered that I’d “calibrated” the extruder a little while ago, using instructions I’d found at various places on the net. As a result I had changed the Extruder Steps Per Unit from the 830.00 the printer had been running fine with to 890.00, an increase of over 7%. Not wanting to wait hours to print another copy of my box and lid pair, I pared that example down to its essentials: two nesting rectangles with 2 mm thick sides, and 0.5 mm clearance (0.25 mm clearance each side) horizontally and vertically. I chamfered the bottom faces of both rectangles to avoid any problems with overextrusion on the first few layers (“Elephant Foot”). I’ve posted the nested rectangles STL files and FreeCAD design file on Cults3d, as Side Thickness Test for 3D Printers. Printing those nested rectangles, I found that, just like the box and lid, the two didn’t nest. Measuring the rectangles’ sides with a digital micrometer, I found the nominally 2 mm thick rectangles were closer to 2.32 mm thick. Doing a little math showed me that the thick walls were the reason the rectangles didn’t nest: Assuming the error is centered on the position of the extruder nozzle, each half of a 2.32 mm thick side was 1.16 mm; intruding 0.16 mm into the gap between the rectangles. My 0.5 mm clearance, 0.25 mm per side, between the two rectangles was too small for the 0.16 * 2 oversized walls, producing a clearance of -0.07 mm per side of the rectangles. That negative clearance means the smaller rectangle is too large to nest into the larger one. So I had proof that the printer was overextruding (caveat: there are lots of reasons sides can be too thick; overextrusion is just one). Since I had previously increased the Extruder Steps Per Unit from 830.00 to 890.00 as a result of “calibration”, I set it back to 830.00 and printed the nesting rectangles again. This time, the rectangles just barely nested (just above zero mm clearance) and the rectangle sides were about 2.17 mm thick. A big improvement, but still too thick. I then reduced the extruder Steps Per Unit to 800.00 and printed the nested rectangles once more. This time most of the sides were about 2.09 mm thick, which again was an improvement. The original method I’d used to “calibrate” extrusion, which created a huge overextrusion, was to measure the amount of filament going into the extruder, and adjusting the extruder Steps Per Unit until a 20 mm extrusion command consumed 20 mm of filament. I can now safely say that method didn’t work for me. My new method is this: Adjust the extruder Steps Per Unit until the width of the 2 mm printed test rectangles is about 2 mm. Now that I seem to have solved the overextrusion problem, I’m moving on to other causes of the rectangle width problem, because some of the walls are now thicker than the others, suggesting a problem with belt tension in the X or Y axis. We’ll see how that goes.
OPCFW_CODE
The previous lesson introduced the Ribbon. The Ribbon is that strip across the top of the screen containing a large number of icons. Each of these icons is referred to as a Ribbon component, and the Ribbon contains many different types of component. Explaining how the different types of Ribbon component work is the subject of this lesson. Notice that the Ribbon is split into logical groups. For the Home tab on the Ribbon you can see there's the Clipboard group, the Font group, the Alignment group, the Number group, the Styles group, the Cells group, and the Editing group. Each of the commands in one of these groups relates to the name of the group. For example, all of the commands in the Number group refer to the manipulation of numbers. You can see that each of the tabs on the Ribbon follows the same system, placing all of the commands into logical groups. Let's now enter some test text into cell C4. I'll click in cell C4 and type the word Test. And then I'll click back on C4 again to make this the active cell. First of all, we'll look at the Normal button. We've already encountered some Normal buttons in the previous lesson. The Bold button in the Font group is a good example of a Normal button. I click on the Bold button, and the text in cell C4 becomes bold. The Italic button is also a Normal button. I'll click on Italic and the text is now both bold and italic. And if I click on the buttons again, each of the attributes are removed from the Test text and it returns to normal again. Now let's look at the Split button. This is the hardest button to understand. A good example of a Split button is the Underline button next to the Italic button. If you hover the mouse over the Underline button you'll see that it has two halves: a left half and a right half. And the right half is an inverted pyramid. You'll see these inverted pyramids all over the Ribbon, and they indicate that, when you click, you'll see a dropdown menu. The left hand side of a Split button operates in exactly the same way as a Normal button. Let's test the left hand half now. I click Underline and an underline is applied to the text in cell C4. And if I click the left hand half of the Split button again, the underline is removed. But now let's look at the right hand part of a Split button. When I click the inverted pyramid, you'll see that there are two different commands available. In this case it's simply a single underline or a double underline. I'll click double underline, and notice that two things have happened. First of all, the text in cell C4 now has a double underline. But also, the default behaviour of the left hand part of the button has changed to a double underline. So if I click the left hand part of the double underline button, the double underline disappears. And if I click again, the double underline is reapplied. In other words, we've changed the default behaviour of the Split button so that it now applies a double underline when you click the left hand part. Let's set the default back to a single underline now, by clicking the right hand part of the Split button and simply clicking Underline. And you can see that now, the Underline button will apply or remove a single underline from the active cell. Let's now look at a few more dropdown lists. The dropdown lists are lists that have that little triangle on the right hand side. In the previous lesson we've already seen that the font can be changed using a dropdown list. And you can also change the size of the text using a dropdown list. Those are both very simple dropdown lists, but let's look at a more complicated dropdown list by looking at the Home tab on the Ribbon and the Editing group. And I'll click the Find & Select dropdown list. Notice that there are many more options here, but particularly I want you to notice that some of these options have an ellipsis next to them. That's three dots in a row (...). Let's take the first option: Find... The ellipsis after Find... tells me that when I click Find..., a dialog will be displayed offering more choices. I'll click Find... now, and you can see the Find and Replace dialog has appeared. I'll now click the red cross in the top right hand corner of the Find and Replace dialog, to dismiss the dialog. Now let's look at a Rich Menu. For an example of a Rich Menu, I'm going to go to the View tab on the Ribbon, and in the Window group I'm going to click Freeze Panes. The Rich Menu is very similar to any other dropdown, but some help text is shown beneath each menu choice. This is Microsoft's way of encouraging you to use some more advanced features by explaining what they're going to do before you click the option. It's a kind of an "in your face help system". You'll understand what each of these options do later in the course, but for now I'll click back onto the worksheet to make the Rich Menu disappear. Now let's look at the most powerful type of dropdown list: a dropdown Gallery. For a good example of a dropdown Gallery, I'll click the Home tab on the Ribbon and, in the Styles group, the cell Styles Gallery. Galleries can visually demonstrate the effect of each choice before you actually make the choice. For example, if I hover the mouse cursor over the Bad style, you can see that the text in cell C4 has become red, and the same for all of the other options in the Cell Styles Gallery. But I'm not going to apply any of the options in the Cell Styles Gallery. I'm just going to click back onto the worksheet to the dismiss the Gallery. Now let's talk about Checkboxes. For an example of a Checkbox, I'm going to go to the View tab on the Ribbon and, in the Show Group, notice that there are four little boxes with ticks in them. These are Checkboxes. I'm going to click the Gridlines Checkbox. At the moment it has a little tick in it, but when I click the tick is removed, and notice that all of the gridlines have gone from the worksheet. If I now click the Checkbox once more, the tick reappears and so do the gridlines. Now let's look at a Dialog Launcher. I'll click the Home tab on the Ribbon, and to demonstrate the Dialog Launcher I'm going to use the Dialog Launcher in the Font Group. The Dialog Launcher is the small arrow in the bottom right hand corner of many groups. A Dialog Launcher launches a complex dialog, offering many more choices than are available within the group components. For example, I might want to put a strikethrough through the word Test. Now there isn't any option within the Font group to do that from the Ribbon, but I can do it from the Dialog Launcher. So I'll click the Dialog Launcher button, to launch the Format Cells dialog. And you can see there's an enormous number of different options on this dialog, offering many complex ways of formatting cells, but the option I wanted was the strikethrough. And notice there's a Strikethrough checkbox, so I'll click the Strikethrough checkbox and then click OK. And you can see that a Strikethrough has now been applied to the text in cell C4. Dialog Launchers usually provide some more expert features that aren't used by the average Excel user. But you'll see a few examples in this course of where we need to use the Dialog Launcher, and in the Expert Course in the series we'll use the Dialog Launchers a lot. Well you now understand how all of the different components on the Ribbon work. And you've completed Lesson 1-13: Understand Ribbon Components.
OPCFW_CODE
Several methods are available to manipulate nodes. The appendChild() method adds a node to the end of the childNodes list. Doing so updates all of the relationship pointers in the newly added node, the parent node, and the previous last child in the childNodes list. When complete, appendChild() returns the newly added node. Here is an example: let returnedNode = someNode.appendChild(newNode); console.log(returnedNode == newNode); // true console.log(someNode.lastChild == newNode); // true If the node passed into appendChild() is already part of the document, it is removed from its previous location and placed at the new location. No DOM node may exist in more than one location in a document. If you call appendChild() and pass in the first child of a parent, it will end up as the last child: // assume multiple children for someNode let returnedNode = someNode.appendChild(someNode.firstChild); console.log(returnedNode == someNode.firstChild); // false console.log(returnedNode == someNode.lastChild); // true When a node needs to be placed in a specific location within the childNodes list, use the insertBefore() method. The insertBefore() method accepts two arguments: The node to insert becomes the previous sibling of the reference node and is ultimately returned by the method. If the reference node is null, then insertBefore() acts the same as appendChild(), as this example shows: insert as last child returnedNode = someNode.insertBefore(newNode, null); console.log(newNode == someNode.lastChild); // true insert as the new first child. returnedNode = someNode.insertBefore(newNode, someNode.firstChild); console.log(returnedNode == newNode); // true console.log(newNode == someNode.firstChild); // true insert before last child. returnedNode = someNode.insertBefore(newNode, someNode.lastChild); console.log(newNode == someNode.childNodes[someNode.childNodes.length - 2]); // true Both appendChild() and insertBefore() insert nodes without removing any. The replaceChild() method accepts two arguments: The node to replace is returned by the function and is removed from the document tree completely while the inserted node takes its place. Here is an example:replace first child let returnedNode = someNode.replaceChild(newNode, someNode.firstChild); replace last child returnedNode = someNode.replaceChild(newNode, someNode.lastChild); When a node is inserted using replaceChild(), all of its relationship pointers are duplicated from the node it is replacing. The replaced node no longer has a specific location in the document. To remove a node without replacing it, you can use the removeChild() method. This method accepts a single argument, which is the node to remove. The removed node is then returned as the function value, as this example shows: Remove first child let formerFirstChild = someNode.removeChild(someNode.firstChild); remove last child. let formerLastChild = someNode.removeChild(someNode.lastChild); A node removed via removeChild() is still owned by the document but doesn't have a specific location in the document. To use the above method you must know the immediate parent node, which is accessible via the parentNode property. Not all node types can have child nodes, and these methods will throw errors if you attempt to use them on nodes that don't support children.PreviousNext
OPCFW_CODE
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. @@IDENTITY vs SCOPE_IDENTITY() vs IDENT_CURRENT – Retrieve Last Inserted Identity of Record This was one of the most interesting blog posts I have ever written. This blog post I wrote as I have been receiving lots of questions related to identity. To avoid the potential problems associated with adding a trigger later on, always use SCOPE_IDENTITY() to return the identity of the recently added row in your T SQL Statement or Stored Procedure. Difference between DISTINCT and GROUP BY – Distinct vs Group By A DISTINCT and GROUP BY usually generate the same query plan, so performance should be the same across both query constructs. GROUP BY should be used to apply aggregate operators to each group. If all you need is to remove duplicates then use DISTINCT. If you are using sub-queries execution plan for that query varies so in that case you need to check the execution plan before making decision of which is faster. Index Seek Vs. Index Scan (Table Scan) Index Scan retrieves all the rows from the table. Index Seek retrieves selective rows from the table. Since a scan touches every row in the table whether or not it qualifies, the cost is proportional to the total number of rows in the table. Thus, a scan is an efficient strategy if the table is small or if most of the rows qualify for the predicate. Since a seek only touches rows that qualify and pages that contain these qualifying rows, the cost is proportional to the number of qualifying rows and pages rather than to the total number of rows in the table. Simple Puzzle Using Union and Union All What will be the output of following two SQL Scripts. First try to answer without running this two script in Query Editor. Here is the blog post with the answer of the puzzle listed above. Introduction to sys.dm_exec_query_optimizer_info sys.dm_exec_query_optimizer_info returns detailed statistics about the operation of the SQL Server query optimizer. You can use this view when tuning a workload to identify query optimization problems or improvements. For example, you can use the total number of optimizations, the elapsed time value, and the final cost value to compare the query optimizations of the current workload and any changes observed during the tuning process. All occurrence values are cumulative and are set to 0 at system restart. All values for value fields are set to NULL at system restart. List All Column With Indentity Key In Specific Database A to the point blog post where I write a script which provides the answer right away to the question in the title. Introduction to Heap Structure – What is Heap? If the data of the table is not logically sorted, in other word there is no order of data specified in a table it is called as Heap Structure. If the index is created on a table, the data stored in the table are sorted logically and it is called as clustered index. If the index is created as a separate structure pointing location of the data it is called non clustered index. Fix : Error : There is already an object named ‘#temp’ in the database Recently, one of my regular blog readers emailed me with a question concerning the following error: Msg 2714, Level 16, State 6, Line 4 There is already an object named ‘#temp’ in the database. This reader has been encountering the above-mentioned error, and he is curious to know the reason behind this. Generate Report for Index Physical Statistics – SSMS A user asked me a question regarding if we can use similar reports to get the detail about Indexes. Yes, it is possible to do the same. There are similar types of reports are available at Database level, just like those available at the Server Instance level. You can right click on Database name and click Reports. Under Standard Reports, you will find following reports. Introduction to Extended Events – Finding Long Running Queries One of the many advantages of the Extended Events is that it can be configured very easily and it is a robust method to collect the necessary information in terms of troubleshooting. There are many targets where you can store the information, which include XML file target, which I really like. In the following Events, we are writing the details of the event at two locations: 1) Ringer Buffer; and 2) XML file. It is not necessary to write at both places, either of the two will do. World Shapefile Download and Upload to Database – Spatial Database One of the most popular blog posts where I explain how to use Spatial Database feature of SQL Server as well where to download the shape file of the world. If you have not read this one blog post, I suggest you to read it, I am sure it will for sure be a fun read. SQL SERVER 2012 – Improvement in Startup Options I often work with advanced features of the SQL Server and this really led me to change how SQL Server is starting up. Recently I was changing the startup options in SQL Server and I was very delighted when I saw the startup option screen in Denali. It has really improved and is very convenient to use. Now I realized that the more I use SQL Server 2012, the more I love it. Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave Here is a 200 second interview of Vinod Kumar I took right after completing the course. There are many people who said they would like to read the transcript of the video. Here I have generated the same. Right Aligning Numerics in SQL Server Management Studio (SSMS) SQL Server Management Studio is my most favorite tool and the comfort it provides for users is sometime very amazing. Recently I was retrieving numeric data in SSMS and I found it is very difficult to read them as they were all right aligned. Please pay attention to following image, you will notice that it is not easy to read the digits as we are used to read the numbers which are right aligned. T-SQL Constructs – *= and += – SQL in Sixty Seconds #009 – Video My friend Vinod came up with this new episode where he demonstrates how dot net developer can write familiar syntax using T-SQL constructs. T-SQL has many enhancements which are less explored. In this quick video we learn how T-SQL Constructions work. We will explore Declaration and Initialization of T-SQL Constructions. We can indeed improve our efficiency using this kind of simple tricks. I strongly suggest that all of us should keep this kind of tricks in our toolbox. Difference between DATABASEPROPERTY and DATABASEPROPERTYEX Earlier I asked a simple question on Facebook regarding difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server. You can view the original conversation there over here. The conversion immediately became very interesting and lots of healthy discussion happened on Facebook page. The best part of having conversation on Facebook page is the comfort it provides and leaner commenting interface. Online Index Rebuilding Index Improvement in SQL Server 2012 Have you ever faced a situation where you see something working but you feel it should not be working? Well, I had similar moments a few days ago. I knew that SQL Server 2008 supports online indexing. However, I also knew that I could not rebuild index ONLINE if I used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I was strongly holding on to my belief, I came across with that situation where I had to go online and do a little bit of reading at Book Online. Reference: Pinal Dave (https://blog.sqlauthority.com)
OPCFW_CODE
(c) SB-Software, [email protected] Scott's Wallpaper Switcher is a system tray tool designed to allow you to quickly switch between wallpapers on your windows desktop. You can also configure the program to automatically switch wallpapers at predetermined intervals (like every 5 minutes). There is a built in "boss key" feature where clicking the system tray icon will bring up a "safe" wallpaper just in case you are in the habit of running inappropriate wallpapers on your windows desktop. This software is freeware, and is free for noncommercial use. The install program will create a "Scotts wallpaper switcher" icon on your windows desktop. The are three ways to use the "boss feature" If you use any of the boss-key options, then auto-sequencing will be disabled until you re-open the main program window. The boss paper is configured by the little <pick boss paper> button. By default, it will pick windows XP "bliss" if you've got it. Buttons on the main window The program is really self explanatory, but here's a run-down if you get stuck: |<Add>||Add another wallpaper to the list| |<Delete>||Remove the currently selected wallpaper| |<Set Mode>||Toggle the mode between tiled, centered, or stretched| |<Move Up>||Move the paper up in the list| |<Move Down>||Move the paper down in the list| |<Sequence Now>||Pick the next paper in the list right now| |<Pick Boss Paper>||Select which paper will be the "boss" paper| |<Switch to Boss Paper Now>||Immediately switch to the boss paper| Configuration options on the main window: |Auto-Sequence||Causes wallpapers to change at predetermined intervals| |Enable Tray Icon Left Click "Boss" paper||Normally, when you click the tray icon, the program dialog will appear. However, you can also use this for boss mode. Select this option, and clicking the tray icon will display the boss paper. (You can still "right click" the tray icon to get a menu, open and close the program, etc)| |Auto-Load on Windows Startup||This will make the program automatically load on windows startup| There are several different modes that you can use to display your wallpaper. Generally, if your wallpaper is the same size as your desktop, then the mode does not matter. However, if your wallpaper is a different size than your desktop, then it'll have to be stretched, centered, or tiled to fill the whole desktop. |Center||The wallpaper is centered on the desktop. If the wallpaper is smaller than the desktop, then there'll be blank space around it| |Tile||The wallpaper is tiled (repeated) until it fills the entire desktop.| |Stretch||The wallpaper is stretched to fill the desktop. If the wallpaper has a different aspect ratio than the desktop, then the aspect ratio may become distorted.| |Nonlinear-1||Performs a nonlinear stretch on the wallpaper. This is intended to help with wallpapers whose aspect ratio does not match the screen. By using a nonlinear effect, objects near the center of the wallpaper will appear with a correct aspect ratio, while objects towards the edges will become distorted.| |Nonlinear-2||Like nonlinear-1, but with greater effect| |Nonlinear-3||Like nonlinear-2, but with greater effect| Contacting the Author: You can usually find him at http://www.sb-software.com/ or check his products page at http://www.sb-software.com/curproj.html
OPCFW_CODE
Managing Multiple PHP Versions with PHP Manager for IIS 7 Some time back I wrote a post about how to run multiple PHP versions on the same server with IIS (Running Multiple PHP Versions with IIS). While running multiple PHP versions wasn’t complicated, it wasn’t a no-brainer either. Today, Ruslan Yakushev (a Program Manager on the IIS team at Microsoft), announced the beta release the PHP Manager project on CodePlex: PHP Manager for IIS 7 – beta release. Not only does the PHP Manager make it a no-brainer to run different PHP versions side-by-side on IIS, it makes it easy to register PHP with IIS, configure various PHP settings, enable/disable PHP extensions, remotely manage PHP configuration via the php.ini file, and check the PHP runtime configuration and environment (i.e. see the output of phpinfo()). Read his announcement for a complete tour of this release (and provide feedback!). I’ll just take a quick look at how easy it is to get multiple PHP versions running in this post. This Week’s Link List (August 20, 2010) The dog days of summer are here. Seems like things are a bit slow lately…I only have a few links to share, in any case. But share I will…the lack of quantity does not impact their quality… Access Control with the Azure AppFabric SDK for PHP In my last post I used some bare-bones PHP code to explain how the Windows Azure AppFabric access control service works. Here, I’ll build on the ideas in that post to explain how to use some of the access control functionality that is available in the AppFabric SDK for PHP Developers. I will again build a barpatron.php client (i.e. a customer) that requests a token from the AppFabric access control service (ACS) (the bouncer). Upon receipt of a token, the client will present it to the bartender.php service (the bartender) to attempt to access a protected resource (drinks). If the service can successfully validate the token, the protected resource will be made available. Understanding Windows Azure AppFabric Access Control via PHP In a post I wrote a couple of weeks ago, Consuming SQL Azure Data with the OData SDK for PHP, I didn’t address how to protect SQL Azure OData feeds with the Windows Azure AppFabric access control service because, quite frankly, I didn’t understand how to do it at the time. What I aim to do in this post is share with you some of what I’ve learned since then. I won’t go directly into how to protect OData feeds with AppFabric access control service (ACS, for short), but I will use PHP to show you how ACS works. Community Input Needed on Direction of PHP Drivers for SQL Server With the recent production-ready release of the MS Drivers for SQL Server for PHP, the team is now completely focused on building the next version. To do that, they really need your input. The team would really appreciate your taking 10 minutes to fill out a survey here: http://www.zoomerang.com/Survey/WEB22AWD66CGAM, but time is running out. The survey closes at 5:00 pm today (Seattle time)! The information the team collects will help them determine and prioritize what features are important to the PHP community in the next release of these drivers. This Week’s Link List (August 13, 2010) I’ve been spending quite a bit of time recently trying to wrap my head around the access control functionality in the Windows Azure platform AppFabric, so you’ll find a couple of related links in this week’s list. You’ll also find several cloud-related announcements, mostly about new content that is available for developers, but also an announcement about getting one month of Windows Azure for free. All that said, I found this first link to be most interesting… Now Available: SQL Server Migration Assistant for MySQL! The SQL Server Migration Assistant (SSMA) team announced today the availability of the migration assistant for MySQL! (Yes, it supports SQL Server Express.) You can… PHP Drivers for SQL Server Released! Today, the SQL Server Driver for PHP team released the production-ready 2.0 versions of the SQLSRV and PDO_SQLSRV drivers for SQL Server. You can… I’m a Judge for My App is Better Challenge I recently found out that I was chosen to be on the judging panel for the My App is Better Challenge. I must say I’m honored and excited, but I also feel somewhat intimidated. Just look at the rest of the judging panel: Consuming SQL Azure Data with the OData SDK for PHP One of the interesting incubation projects that the SQL Azure team is working on is the SQL Azure OData Service. If that sentence makes no sense to you, then start by reading this short overview on the SQL Azure team blog: Introduction to Open Data Protocol and SQL Azure. I’ll paraphrase what I read in that post to inspire this post:
OPCFW_CODE
Incorporating big data, Hadoop, Spark and NoSQL in Data Warehouse Big data, Hadoop, in-memory analytics, Spark, self-service BI, analytical database servers, data virtualization, and NoSQL are just a few of the many new technologies and tools that have become available for developing BI systems. Most of them are very powerful and allow for development of more flexible and scalable BI systems. But which ones do you pick? Due to this waterfall of new developments, it’s becoming harder and harder for organizations to select the right tools. Which technologies are relevant? Are they mature? What are their use cases? These are all valid but difficult to answer questions. This seminar gives a clear and extensive overview of all the new developments and their inter-relationships. Technologies and techniques are explained, market overviews are presented, strengths and weaknesses are discussed, and guidelines and best practices are given. The biggest revolution in BI is evidently big data. Therefore, considerable time in the seminar is reserved for this intriguing topic. Hadoop, Spark, MapReduce, Hive, NoSQL, SQL-on-Hadoop are all explained. In addition, the relation with analytics is discussed extensively. This seminar gives you a unique opportunity to see and learn about all the new BI developments. It’s the perfect update for those interested in knowing how to make BI systems ready for the coming ten years. - The Changing World of Business Intelligence - Big Data: Hype or reality? - Operational intelligence: does it require online data warehouses? - Data warehouses in the cloud - Self-service BI - The business value of analytics - Hadoop Explained - The relationship between big data and analytics - The Hadoop software stack explained, including HDFS, MapReduce, YARN, Hive, Storm, Sqoop, Flume, and HBase - The balancing act: productivity versus scalability - Making big data available to a larger audience with SQL-on-Hadoop engines, such as Apache Drill and Hive, Apache Phoenix, Cloudera Impala, IBM BigSQL, JethroData, Pivotal HawQ, SparkSQL, and Splice Machine - Spark Explained - Spark is in-memory analytical processing - The interfaces: SQL, R, Scala, Python - Does Spark need Hadoop? - Use cases of Spark - NoSQL Explained - Classification of NoSQL database servers: key-value stores, document stores, column-family stores and graph data stores - Market overview: CouchDB, Cassandra, Cloudera, MongoDB, and Neo4j - Strong consistency or eventual consistency? - Why an aggregate data model? - How to analyze data stored in NoSQL databases - Overview of Analytical SQL Database Servers - Are classic SQL database servers more suitable for data warehousing? - Important performance improving features: column-oriented storage, in-database analytics - Market overview of analytical SQL database servers, Actian Matrix and Vector, Dell/EMC/Greenplum, Exasol, HP/Vertica, IBM/Pure Data Systems for Analytics, Kognitio, Microsoft, SAP HANA and Sybase IQ, SnowflakeDB, Teradata Appliance and Teradata Aster Database - Streaming Database Servers - What are streaming database servers, and why are they different from messaging products, such as Apache Kafka? - Streaming database servers support analytics at the speed of business - Different forms of operational BI: operational reporting, operational analytics, and embedded analytics - Market overview: Cisco ParStream, SQLStream Blaze, StreamBase - NewSQL databaseservers - NewSQL stands for high-performance transactional SQL database servers - Simpler transaction mechanisms to implement scale-out - What does the term geo-compliancy mean? - Market overview: Clustrix, GenieDB, MariaDB, NuoDB, Splice Machine, and VoltDB - Incorporating Big Data Technology in BI Systems - What re the use cases of Hadoop in classic data warehouse architectures? - Using streaming database servers for real-time analytics - What could be the role of NoSQL products? - Using Spark as performance booster for data marts - Closing Remarks In this seminar Rick van der Lans answers the following questions: - Learn about the trends and the technological developments related to business intelligence, analytics, data warehousing, and big data. - Discover the value of big data and analytics for organizations - Learn which products and technologies are winners and which ones are losers. - Learn how new and existing technologies, such as Hadoop, NoSQL and NewSQL, will help you create new opportunities in your organization. - Learn how more agile data business intelligence systems can be designed. - Learn how to embed big data and analytics in existing business intelligence architectures.
OPCFW_CODE
The Junos Space SDK can be used to develop three different types of applications. These applications are: Data applications that contain only user interface components and that are designed to consume web services exposed by external applications, Junos Space applications or or platform services or both. These can be either server side or user interface side mash ups that consolidate existing business logic from Junos Space platform, Junos Space Applications and external resources. Business logic applications that publish APIs through the platform's REST Web services interfaces for other hosted or external applications to consume. These applications do not have a built-in user interface. They enable north-bound interface integration and access to, and leverage of, Junos Space platform intelligence on proprietary or third-party solutions. Business logic applications can be used to build modular applications for easy customization across use cases. These applications can be client or server-side mashups composed of platform services and business logic. Modular applications enable delivery of modular functionality with the ability to plug in new capabilities or customizations based on need. For example, an application with base level functions common to all service providers can be augmented with custom business logic for individual service providers. Native applications with custom business logic and APIs published through the platform's REST web services, combined with a user interface that is accessed from within a Junos Space browser user interface. These applications, such as Network Activate, are fully hosted on, and accessed only from within, the Junos Space platform. Native applications can be used to develop custom workflows and functionality, as well as optionally combine platform services with external business intelligence. These applications can expose REST Web services for use by other hosted or external applications, and they can leverage or inherit the Junos Space browser UI framework and paradigms. The Junos Space SDK plug-in provides the following project types that enable you to create the different types of supported applications: EJB Project—An EJB project consists of server-side components incorporating business logic. The EJB can consume the platform services exposed using REST. For more information, see the Creating EJBs topic. Web Service Project—A Web service project is used to develop REST web services components. These are built on top of EJB projects and use EJB interfaces to the Junos Space platform interfaces. For more information, refer to the Creating REST Services topic. Web Project—A web project is used to develop a UI component used to create a custom browser-based front end to the platform's REST interface. For more information, see the Creating the UI topic. Utility Project—A utility project can be any Java project. Eclipse deploys utility jar/classes on the server that can be shared between applications. You can add utility project to your applications. For more information, see the topic Adding a Utility Project to an Existing Application. The Junos Space SDK plug-in supports three application models. These application models use a combination Web, EJB, and Web Serivice projects that are selected when you create a new Junos Space application. The three application models are: The components of the three application models are illustrated in the following figure. A System.sar and an EAR (Enterprise Archive) file are created for all three models. System.sar is a system archive file containing deployment-related information for the Junos Space application. The EAR is a deployable archive of the Junos Space application. Additional EJB/web service projects can be added to an existing Junos Space application for any of the three models. This model supports creation of the UI components (Web package), Web services component (Web services package) and server-side components (EJB packages). Server-side components are packaged in a JAR file and the UI and Web services components are packaged in a WAR file. Together these separate components would be packaged in an EAR file. This kind of application model is used to support those applications to be completely hosted and accessed from a Juniper-hosted environment. When you create a complete application the plug-in creates a Web, EJB, and Web service (WebSvc) project in Eclipse. This model supports creation of the server-side components and the Web services components only. These applications are packaged in JAR and WAR files respectively and then packaged in an EAR file. This application model is to support those applications for which the business model and the access layer in the form of Web services are to be hosted by the Juniper environment. When you create a Web service application, the plug-in creates an EJB and WebSvc project in Eclipse. This model supports creation of the UI components only. These applications are packaged in a WAR file that will eventually be packaged in an EAR file. This is a purely client-based application model. The application will contain a Web project in addition to a System.sar and an EAR. When you create a UI-only application, the plug-in creates only a Web project in Eclipse.
OPCFW_CODE
[Lazarus] What to replace Application.Processmessages with? bo.berglund at gmail.com Tue Oct 13 10:15:46 CEST 2020 I have been working some time to convert a rather big Delphi2007 Windows service application to a regular Linux program possible to run as a Linux systemd service. In Linux I was told that the application needs to be a regular program to run in a non-logged on setting on a Linux server (without desktop). So I have come a long way now and it feels like i am in the final All of the TCP/IP socket communications to a client config app and handling of serial ports and digital I/O has been solved. The program runs OK at its core as a non-gui app and I can use the existing client application on Windows to talk to it and configure the But now when I approach the actual core of the service, the ability to run a longish task on external hardware while still being able to handle incoming client requests etc I have found a possible problem... The original server on Windows was built as a TService application and had the GUI support maybe via a Delphi data module or otherwise so the Application object is available. I had to change this to a simple program instead and so I lost the In a lot of places in the task execution code there are Application.Processmessages in wait loops and these I had to switch off via a conditional named GUI_MODE and instead I have a sleep(T) call there. Example: while FContactResTestRunning and not FCancelOp do Here the booleans FContactResTestRunning and FCancelOp are supposed to be set in the main application on reception of certain events. The class file where all of this is coded is also used in a GUI application on Windows where the user communicates with the instrumentation manually from the app. In that setting Application is available and all works well. The service application itself (from 2004) is based on timers to perform things like execution of the measurement task itself in order to isolate that from the main application. That is why there is Application.Processmessages inside the wait loops so that things like reception of RS232 messages from the equipment can be handled and analyzed and flags be set which are waited for in the timer... The TTimer objects have been replaced by TFpTimer objects in the ported code and this seems to work fine, whereas TTimer does not. I know that the system should have been designed using threads instead, but that is not there and it is probably a too difficult project to try and replace the current structure with threads. Now I wonder if I could put something else into the loops so that the main object of Application.Processmessages will be handled, namely to let event functions run as needed. Can I for example use CheckSynchronize in these loops? I.e. Application.Processmessages ==> CheckSynchronize? Or is there any other way? Developer in Sweden More information about the lazarus
OPCFW_CODE
[Bug] tsconfig: remove dependency on "dom" types Is this a new bug? [X] I believe this is a new bug [X] I have searched the existing issues, and I could not find an existing issue for this bug Current Behavior The tsconfig used by this project allows your code to reference dom types. Doing so causes problems for end-users since the library is intended to be run server-side only. Our server projects intentionally do not make dom types available, so we're unable to use your client library. Compilation fails when encountering DOM type-references in Pinecone code, like RequestCredentials and WindowOrWorkerGlobalScope in runtime.d.ts. Expected Behavior Avoid publishing code that couples to DOM types when the library is intended to run in Node.js. Steps To Reproduce Tested with Node.js LTS (20). $ npm i @pinecone-database/pinecone typescript @types/node@^20 $ npm ls p@ /tmp/p ├──<EMAIL_ADDRESS>├──<EMAIL_ADDRESS>└──<EMAIL_ADDRESS> $ npx tsc node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/control/runtime.d.ts:23:19 - error TS2304: Cannot find name 'RequestCredentials'. 23 credentials?: RequestCredentials; ... on and on ... tsconfig.json: { "compilerOptions": { // target node 20 (https://github.com/microsoft/TypeScript/wiki/Node-Target-Mapping) "lib": ["ES2023"], "module": "node16", "target": "ES2022", "declaration": true, "strict": true, "outDir": "build", "baseUrl": "./", "rootDir": "src", "esModuleInterop": true } } src/index.ts: import { Pinecone } from '@pinecone-database/pinecone' export function foo(pinecone: Pinecone) { console.log(pinecone) } Relevant log output No response Environment - OS: Ubuntu 23.10 - Node.js 20 $ npm ls p@ /tmp/p ├──<EMAIL_ADDRESS>├──<EMAIL_ADDRESS>└──<EMAIL_ADDRESS> ### Additional Context _No response_ Hi @mpareja, thanks for reaching out! While the intention of our Typescript client is to run server-side, we purposefully do not prevent users from running in the browser, as it's sometimes helpful during the development stages. We do, however, issue a warning, which we hope users will heed when moving from development to production. We do not plan on removing dom types from the client at this time, but we will consider this in the future! Hi @aulorbe, Thanks for taking the time to respond! Just to be clear, I'm not suggesting precluding users from using the library in the browser. The concern I'm raising is that the Pinecone client is currently forcing all users to include DOM types in their Typescript builds. Including DOM types in builds targeting Node.js only systems is not great since DOM-only functions aren't actually available for execution under node. Hi @aulorbe, Thanks for taking the time to respond! Just to be clear, I'm not suggesting precluding users from using the library in the browser. The concern I'm raising is that the Pinecone client is currently forcing all users to include DOM types in their Typescript builds. Including DOM types in builds targeting Node.js only systems is not great since DOM-only functions aren't actually available for execution under node. Ah, I think I see where the confusion is. While our Typescript client can definitely run on Node (and is primarily aimed at users running in Node), it is not only a Node module. You can see here in our workflow files that we also build to allow users to run in non-Node runtimes, such as Edge. Edge is actually primarily where the dom types are required.
GITHUB_ARCHIVE
Recently Viewed Topics Security Center 3.4.5 Release Notes Watch the Security Center 3.4.5 Release Video by Tenable CTO, Ron Gula. The following list describes many of the changes that are included in Security Center version 3.4.5, the significant issues that have been resolved and notes for upgrading. A PDF file of these release notes is also available here. Starting with Security Center 3.4.5, if Security Center exceeds its license key IP limit, only administrator logins are allowed with limited functionality. An opportunity is given to upload a new license key to accommodate the excess IP count and restore functionality. Contact [email protected] to obtain a new license key if necessary. As with any application, it is always advisable to perform a backup of your Security Center installation before upgrading. Upgrading from 3.4.x There are no special upgrade notes for those users running Security Center 3.4.0 or later. The command syntax for an RPM upgrade is as follows: # rpm -Uvh <RPM Package File Name> Bundled third-party products updated include newer versions of: Apache, libpng, PHP and SQLite. Support has been added for the enhanced web application testing settings introduced with recent Nessus plugin modifications. It is important to understand the following requirements for web application test scans: - Only one web server can be scanned per web application test. - Scanned hosts must be specified within the Security Center scan page in the following format: [IP:domain_name] or [IP:hostname]. An example of a scanned system would be: New Scan options: Web Application Test Settings: - Enable Web Application Tests - Send POST Requests - HTTP Parameter Pollution - Test embedded web servers - Maximum Run Time (min) - Combos of arguments values - Stop at first flaw More information can be found at: http://blog.tenablesecurity.com/2009/06/enhanced-web-application-attacks-added-to-nessus.html. The following new reporting templates have been added: - Windows Patch Summary Per Host.xml - filters on plugin 38153 for a concise list of hosts that have missing SMB patches and which patches are missing. - Scanned Hosts in Last 90 Days.xml - lists all hosts with a completed scan in the last 90 days - Scanned Hosts in Last 30 Days.xml - lists all hosts with a completed scan in the last 30 days - Scanned Hosts in Last 7 Days.xml - lists all hosts with a completed scan in the last 7 days - CCE Configuration Summary.xml - Summary of all Nessus compliance checks that contain "CCE" in their name. This report will summarize the compliant and non-compliant hosts with respect to the FDCC and other SCAP style audits. - CCE Configuration Report.xml - Report of all Nessus compliance checks, tested hosts, tested Windows servers and raw test results that contain "CCE" in their name. This report will detail the compliant and non-compliant hosts with respect to the FDCC and other SCAP style audits. - PCI Configuration Summary.xml - Summary of all Nessus compliance checks that contain "PCI" in their name. This report will summarize the compliant and non-compliant hosts with respect to the PCI audit policies maintained by Tenable. - PCI Configuration Report.xml - Report of all Nessus compliance checks, tested hosts, tested Windows servers and raw test results that contain "PCI" in their name. This report will detail the compliant and non-compliant hosts with respect to the PCI audit polices maintained by Tenable. - Scan results import process improved - The cumulative database (HDB) will no longer be converted to .nessus during scan imports. The HDB conversion will occur as part of the nightly processes. - SSH/LCE connection reduction - performance improvement - Change default refresh time for Nessus from one to 12 hours - Increased the email size limit to 16MB - Security Center is now officially supported on CentOS 5 - First seen and last seen dates being shown for scan and new scan results (requires browser cache to be cleared after upgrade) - Delete Static Assets menu item's name has changed to Static Asset List Add/Edit/Delete (See New Screen) - Plugin IDs report filter (now accepts up to 16 plugin IDs vs. four) - PSM should be able to edit contents of a static asset range - Choosing an Asset List & an adhoc IP causes scan to fail - Sourcefire modified download process of Snort rules requiring change to snort_update.pl (version 2.8 Snort ruleset support). - Option to enable/disable Build splash screen from Admin login - Total Active IP count now correctly includes hosts scanned for compliance checks. - Policy plugin load page speed and stability improved.
OPCFW_CODE
Customizing the Start Screen in Windows 8 and Windows Server 2012 How can I customize the Start screen in Windows 8 and Windows Server 2012? Microsoft introduced a means to enforce Start screen layout using Group Policy in the Enterprise and RT editions of Windows 8.1, but it’s likely that most organizations will want an easy way to provide a default Start screen for users that they can then customize. PowerShell provides two cmdlets that can be used to capture a customized Start screen and then import the configuration to the default user profile, which is used as the basis for creating profiles as new users log on to a device for the first time. Exporting the Start Screen Layout Begin by deploying a machine that has all the apps installed that you want to pin to the Start screen. Customize the Start screen and pin applications as required. Once the Start screen has been customized manually on a reference machine, open an elevated PowerShell window. - Type powershell on the Start screen and select the app in the search results. To launch the console elevated, press CTRL+SHIFT+ENTER. - In the PowerShell console, run the following command: export-startlayout -as bin -path c: \customstartscreenlayout.bin –verbose Import a Customized Layout Now that we have a Start screen customization file, you can use the following import command to customize the default user profile, either on a live machine from an elevated PowerShell console, or as part of a script to build a new machine, or a System Center Configuration Manager (SCCM) or Microsoft Deployment Tool (MDT) task sequence. Say Goodbye to Traditional PC Lifecycle Management Traditional IT tools, including Microsoft SCCM, Ghost Solution Suite, and KACE, often require considerable custom configurations by T3 technicians (an expensive and often elusive IT resource) to enable management of a hybrid onsite + remote workforce. In many cases, even with the best resources, organizations are finding that these on-premise tools simply cannot support remote endpoints consistently and reliably due to infrastructure limitations. import-startlayout -layoutpath c:\customstartscreenlayout.bin -mountpath %systemdrive%\ To run the command from a batch file: powershell -noninteractive -command import-startlayout -layoutpath .\customstartscreenlayout.bin -mountpath %systemdrive%\ Note in the above command line I haven’t specified an explicit path for customstartscreenlayout.bin. It must be located in the working directory, i.e. the directory from which the batch file is launched. The mountpath parameter in the examples above, forces the command to change the default user profile on the local machine. If you want to run the command against an offline image, specify the path to your .wim image using this parameter.
OPCFW_CODE
|700€ + DDV Docker and Kubernetes are transforming the application landscape - and for good reason. This course is the perfect way to get yourself – and your teams – up to speed and ready to take your first steps. There are 5 modules in this course Take the next step in your software engineering career by getting skilled in container tools and technologies! Using containerization, organizations can move applications quickly and seamlessly among desktop, on-premises, and cloud platforms. In this beginner course on containers, learn how to build cloud native applications using current containerization tools and technologies such as Docker, container registries, Kubernetes, Red Hat, OpenShift, and Istio. Also learn how to deploy and scale your applications in any public, private, or hybrid cloud. By taking this course you will familiarize yourself with: - Docker objects, Dockerfile commands, container image naming, Docker networking, storage, and plugins - Kubernetes command line interface (CLI), or “kubectl” to manipulate objects, manage workloads in a Kubernetes cluster, and apply basic kubectl commands - ReplicaSets, autoscaling, rolling updates, ConfigMaps, Secrets, and service bindings - The similarities and differences between OpenShift and Kubernetes Each week, you will apply what you learn in hands-on, browser-based labs. By the end of the course, you’ll be able to build a container image, then deploy and scale your container on the cloud using OpenShift. The skills taught in this course are essential to anyone in the fields of software development, back-end & full-stack development, cloud architects, cloud system engineers, devops practitioners, site reliability engineers (SRE), cloud networking specialists and many other roles. What you'll learn Docker and Kubernetes are changing the way you build, ship, and manage your applications. In this course you will learn the fundamentals of Docker, Kubernetes and OpenShift. First, you will learn the basics of what a container is and how it enables cloud-native application designs. Next, you will explore the roles of Docker and Kubernetes, as well as the basics of how they work. Finally, you will discover how to prepare yourself and your organization to thrive in a container world. When you are finished with the course, you will have everything you need to take your container journey to the next level. Module 1: Containers and Containerization Start by learning about container concepts, features, use cases, and benefits. Building on your new knowledge of containers, you’ll learn what Docker does and discover why Docker is a winner with developers. You’ll learn what Docker is, become acquainted with Docker processes, and explore Docker’s underlying technology. Learn about how developers and organizations can benefit from using Docker and see which situations are challenging for using Docker. Next, learn how to build a container image using a Dockerfile, how to create a running container using that image, become familiar with the Docker command line interface (CLI), and explore frequently used Docker commands. You’ll become knowledgeable about Docker objects, Dockerfile commands, container image naming, and learn how Docker uses networks, storage, and plugins. Then, assimilate this knowledge when you see Docker architecture components in action and explore containerization using Docker. At the end you’ll pull an image from a Docker Hub registry. You’ll run an image as a container using Docker, build and tag an image using a Dockerfile, and push that image to a registry. Module 2: Kubernetes Basics You will learn what container orchestration is. Then, explore how developers can use container orchestration to create and manage complex container environment development lifecycles. Kubernetes is currently the most popular container orchestration platform. You’ll examine key Kubernetes architectural components, including control plane components and controllers. Explore Kubernetes objects, and learn how specific Kubernetes objects such as Pods, ReplicaSets, and Deployments work. Then, learn how developers use the Kubernetes command line interface (CLI), or “kubectl” to manipulate objects, manage workloads in a Kubernetes cluster, and apply basic kubectl commands. You’ll be able to differentiate the benefits and drawbacks of using imperative and declarative commands. At the end of this module, you will use the kubectl CLI commands to create resources on an actual Kubernetes cluster. At the end you’ll use the Kubernetes CLI to create a Kubernetes pod, create a Kubernetes deployment, create a ReplicaSet and see Kubernetes load balancing in action. Module 3: Managing Applications with Kubernetes You’ll explore ReplicaSets, autoscaling, rolling updates, ConfigMaps, Secrets, and service bindings, and learn how you can use these capabilities to manage Kubernetes applications. You’ll learn how ReplicaSets scale applications to meet increasing demand, and how autoscaling creates dynamic demand-based scaling. You’ll see how to use rolling updates to publish application updates and roll back changes without interrupting the user experience. You’ll learn how to use ConfigMaps and Secrets to provide configuration variables and sensitive information to your deployments and to keep your code clean. At the end you’ll scale and update applications deployed in Kubernetes. Module 4: The Kubernetes ecosystem: OpenShift, Istio, etc. You’ll learn more about the growing Kubernetes ecosystem and explore additional tools that work well with Kubernetes to support cloud-native development. You’ll gain an understanding of the similarities and differences between Red Hat ® OpenShift® and Kubernetes and see what OpenShift architecture looks like. You’ll learn about OpenShift builds and BuildConfigs, and OpenShift build strategies and triggers. You'll also discover how operators can deploy whole applications with ease. Finally, you’ll examine how the Istio service mesh manages and secures traffic and communication between an application’s services. At the end you’ll use the oc CLI to perform commands on an OpenShift cluster. And you’ll use the OpenShift build capabilities to deploy an application from source code stored in a Git repository. Module 5: Final Assignment For the Final Project, you will put into practice the tools and concepts learned in this course, and deploy a simple guestbook application with Docker and Kubernetes.The entire application will be deployed and managed on OpenShift. The course is technical, because of this it is expected that participants are capable of typing and have a general knowledge about computers and programs. To participate in a course it's helpful to have a foundational understanding of certain concepts and technologies. Here are some general prerequisites that are recommended but not mandatory: Za več informacij nas kontaktirajte na telefonsko številko: 01 568 40 40 ali [email protected].
OPCFW_CODE
Digital platforms are all around us. Platforms like Twitter, YouTube, AirBnB, and Upwork are more than just websites. They create a place where vendors, customers, and agencies come to buy or sell services. Consumers don’t connect with the platform owners. The platform is self-service, available on demand, and formed around the needs of most clients. Platforms are now essential for the way we connect with others. When discussing “platform engineering,” we examine a more specific type of work. Digital adoption is essential for companies to succeed in the digital economy. Platform engineering is using existing software, hardware, and services to create new solutions that add value to the user. Platform engineers create platforms for software developers. Like Twitter users share different opinions, software developers must share different skills. Every person working on a software project has additional input to the project. A platform engineer’s job is to make it easy to talk for these people to communicate. For developers, platforms are a kind of digital glue. They stick a team together, creating accessible and efficient communications. They are an essential step in ongoing digital adaption. In this article, we will introduce the essential trend of platform engineering. To kick off our discussion, we’ll discuss how it was created and explain its many advantages for development teams. Then, you’ll receive a brief overview of some important processes in platform engineering that can help streamline your workflows. What Is Platform Engineering? The challenges for agile software development teams are not new. Developing great software has always required collaboration. But it is tough to share their progress when each specialist developer works with different tools. Platform engineering solves this old problem for our cloud-native era. Platform engineers work closely with all stakeholders to create a platform optimized for performance, security, and reliability. They design, build and maintain an internal Developer Platforms (or IDP) infrastructure. The IDP is a digital product that links many other services. Platform engineering is becoming an important trend in software development. To understand why we must know why it is superior to older solutions. Why Is Platform Engineering Important? In the old days of software development, software engineers were constantly waiting for contributions from other teams. Platform engineering makes a major cultural shift away from this inefficient system. It helps different teams to offer self-service solutions to one another in real time. In an older software development model, exchanges between teams were very inefficient. Application developers created code, but before seeing it work, they had to send the product to a builder, tester, sysadmin, operations managers, and more. In one popular phrase, developers had to “throw it over the wall” before moving forward. It took a lot of work just to develop a workable product. Agile Software Development was designed to make this process faster. Individual agile teams had many of the skills necessary for workable solutions. But in practice, infrastructure services were still handled by other teams. A platform engineering team develops platforms to help every team share their work. An Internal Developer Platform means that every team can operate independently while constantly sharing their work with others. What are the benefits of using Platform Engineers? Platform engineering gives a new solution for old problems. This innovation brings many benefits. In particular: - All developers have a much better working experience. Working with others is low-friction: it is easy to complete the routine tasks that used to be so difficult. - Easy to bring new engineers into a development team. Every software team works differently, with distinct steps to reach a working product. When a dedicated platform supports those steps, new employees can do their job more effectively and quickly. - Developers are free to work autonomously. Platform engineering gives everyone the freedom to work through their own problems. With a platform connecting self-service features, developers can plug into different operations for a working solution. - Easy to address common issues. A good platform engineering team will listen to feedback and ensure the internal developer platform can adjust to their problems. These features mean that software development teams can work much more efficiently. Principles of Platform Engineering As Gartner defines it, platforms are a kind of “middleware.” It stands in between different products. However, they must still adapt to a set of best practices. Every platform needs a purpose Platforms are not an off-the-shelf service. The particular needs of an organization must drive them. That’s why each business needs a dedicated platform engineer who can respond better to the specific needs of the development tasks. The platform is a product The platform engineering team is not a service desk. Their purpose is not to answer every single query from day to day. Instead, they must listen to input and feedback and create a solution that customers can use on a self-service basis. The platform should be usable, reliable, trustworthy, and adaptable. No need to repeat work Yes, platform engineering teams help to solve specific problems within software engineering organizations. However, they do not need to start with every problem from scratch. Consider leading examples like AWS, GCP, Azure, and the IBM Cloud platform. These services mean developers don’t consider the infrastructure underlying the development environment. When Is The Right Time To Introduce a Platform Engineering Team? The smallest development teams do not need a dedicated platform team. The right time to create the team depends on several factors, including the organization’s size, the complexity of its systems, and the frequency of software releases. As an organization grows, managing and maintaining software systems become increasingly difficult. When many different units try to collaborate on a project, the time will be right to hire a platform engineer to create and maintain the IDP. If the software systems have become highly complex, a platform engineer can ensure that the software systems are scalable, maintainable, and reliable. Finally, if the organization releases software updates frequently, a Platform Engineering team can help manage the release process and ensure that the updates are delivered reliably and efficiently. Platform Engineering Makes Everything Easier Overall, we can see that platform engineering has a very valuable purpose. This field has a very positive future. As a 2020 Deloitte report explained, “When designed correctly, platforms can become powerful catalysts for rich ecosystems of resources and participants.” Companies that invest in platforms will quickly find positive results. Unsurprisingly, major companies like Tesla and Google implement development platforms as part of their digital transformation roadmaps. Platform engineering offers many benefits to software development teams, including better working experiences, easy onboarding of new employees, autonomous working, and easy solutions to common issues.
OPCFW_CODE
from __future__ import annotations import functools from typing import TYPE_CHECKING, Any, Callable, Iterator, Optional, Union, cast import rich.repr if TYPE_CHECKING: from . import Flight from .lazy import LazyTraffic from .mixins import _HBox @rich.repr.auto() class FlightIterator: """ A FlightIterator is a specific structure providing helpers after methods applied on a Flight that return a sequence of pieces of trajectories. Methods returning a FlightIterator include: - ``Flight.split("10T")`` iterates over pieces of trajectories separated by more than 10 minutes without data; - ``Flight.go_around("LFBO")`` iterates over landing attempts on a given airport; - ``Flight.aligned_on_ils("LFBO")`` iterates over segments of trajectories aligned with any of the runways at LFBO. - and more. Since a FlightIterator is not a Flight, you can: - iterate on it with a for loop, or with Python built-ins functions; - index it with bracket notation (using positive integers or slices); - get True if the sequence is non empty with ``.has()``; - get the first element in the sequence with ``.next()``; - count the element in the sequence with ``.sum()``; - concatenate all elements in the sequence with ``.all()``; - get the biggest/shortest element with ``.max()``/``.min()``. By default, comparison is made on duration. .. warning:: **FlightIterator instances consume themselves out**. If you store a FlightIterator in a variable, calling methods twice in a row will yield different results. In Jupyter environments, representing the FlightIterator will consume it too. To avoid issues, the best practice is to **not** store any FlightIterator in a variable. """ def __init__(self, generator: Iterator["Flight"]) -> None: self.generator = generator self.cache: list["Flight"] = list() self.iterator: None | Iterator["Flight"] = None def __next__(self) -> "Flight": if self.iterator is None: self.iterator = iter(self) return next(self.iterator) def __iter__(self) -> Iterator["Flight"]: yield from self.cache for elt in self.generator: self.cache.append(elt) yield elt def __len__(self) -> int: return sum(1 for _ in self) @functools.lru_cache() def _repr_html_(self) -> str: title = "<h3><b>FlightIterator</b></h3>" concat: None | "Flight" | "_HBox" = None for segment in self: concat = segment if concat is None else concat | segment return title + ( concat._repr_html_() if concat is not None else "Empty sequence" ) def __rich_repr__(self) -> rich.repr.Result: for i, segment in enumerate(self): if i == 0: if segment.flight_id: yield segment.flight_id else: yield "icao24", segment.icao24 yield "callsign", segment.callsign yield "start", format(segment.start) yield f"duration_{i}", format(segment.duration) def __getitem__( self, index: Union[int, slice] ) -> Union["Flight", "FlightIterator"]: if isinstance(index, int): for i, elt in enumerate(self): if i == index: return elt if isinstance(index, slice): if index.step is not None and index.step <= 0: raise ValueError("Negative steps are not supported") def gen() -> Iterator["Flight"]: assert isinstance(index, slice) modulo_start = None for i, elt in enumerate(self): if index.start is not None and i < index.start: continue if index.stop is not None and i >= index.stop: continue if modulo_start is None: base = index.step if index.step is not None else 1 modulo_start = i % base if i % base == modulo_start: yield elt return self.__class__(gen()) raise TypeError("The index must be an integer or a slice") def has(self) -> bool: """Returns True if the FlightIterator is not empty. Example usage: >>> flight.emergency().has() True This is equivalent to: >>> flight.has("emergency") """ return self.next() is not None def next(self) -> Optional["Flight"]: """Returns the first/next element in the FlightIterator. Example usage: >>> first_attempt = flight.runway_change().next() This is equivalent to: >>> flight.next("runway_change") """ return next(self, None) def final(self) -> Optional["Flight"]: """Returns the final (last) element in the FlightIterator. Example usage: >>> first_attempt = flight.runway_change().final() This is equivalent to: >>> flight.final("runway_change") """ segment = None for segment in self: continue return segment def sum(self) -> int: """Returns the size of the FlightIterator. Example usage: >>> flight.go_around().sum() 1 This is equivalent to: >>> flight.sum("go_around") """ return len(self) def all(self, flight_id: None | str = None) -> Optional["Flight"]: """Returns the concatenation of elements in the FlightIterator. >>> flight.aligned_on_ils("LFBO").all() This is equivalent to: >>> flight.all(lambda f: f.aligned_on_ils("LFBO")) >>> flight.all('aligned_on_ils("LFBO")') """ from traffic.core import Flight, Traffic if flight_id is None: t = Traffic.from_flights(flight for i, flight in enumerate(self)) else: t = Traffic.from_flights( flight.assign(flight_id=flight_id.format(self=flight, i=i)) for i, flight in enumerate(self) ) if t is None: return None return Flight(t.data) def max(self, key: str = "duration") -> Optional["Flight"]: """Returns the biggest element in the Iterator. By default, comparison is based on duration. >>> flight.query("altitude < 5000").split().max() but it can be set on start time as well (the last event to start) >>> flight.query("altitude < 5000").split().max(key="start") """ return max(self, key=lambda x: getattr(x, key), default=None) def min(self, key: str = "duration") -> Optional["Flight"]: """Returns the shortest element in the Iterator. By default, comparison is based on duration. >>> flight.query("altitude < 5000").split().min() but it can be set on ending time as well (the first event to stop) >>> flight.query("altitude < 5000").split().min(key="stop") """ return min(self, key=lambda x: getattr(x, key), default=None) def __call__( self, fun: Callable[..., "LazyTraffic"], *args: Any, **kwargs: Any, ) -> Optional["Flight"]: from traffic.core import Flight, Traffic in_ = Traffic.from_flights( segment.assign(index_=i) for i, segment in enumerate(self) ) if in_ is None: return None out_ = fun(in_, *args, **kwargs).eval() if out_ is None: return None return Flight(out_.data) def plot(self, *args: Any, **kwargs: Any) -> None: """Plots all elements in the structure. Arguments as passed as is to the `Flight.plot()` method. """ for segment in self: segment.plot(*args, **kwargs) def flight_iterator( fun: Callable[..., Iterator["Flight"]] ) -> Callable[..., FlightIterator]: msg = ( "The @flight_iterator decorator can only be set on methods " ' annotated with an Iterator["Flight"] return type.' f' Got {fun.__annotations__["return"]}' ) if not ( fun.__annotations__["return"] == Iterator["Flight"] or eval(fun.__annotations__["return"]) == Iterator["Flight"] ): print(eval(fun.__annotations__["return"])) print(Iterator["Flight"]) raise TypeError(msg) @functools.wraps(fun, updated=("__dict__", "__annotations__")) def fun_wrapper(*args: Any, **kwargs: Any) -> FlightIterator: return FlightIterator(fun(*args, **kwargs)) fun_wrapper = cast(Callable[..., FlightIterator], fun_wrapper) fun_wrapper.__annotations__["return"] = FlightIterator return fun_wrapper
STACK_EDU
// ====================================================================== /*! * \brief Astronomical calculations * * Solar calculations * * Based on NOAA JavaScript at * <http://www.srrb.noaa.gov/highlights/sunrise/azel.html> * <http://www.srrb.noaa.gov/highlights/sunrise/sunrise.html> * * Reference: * Solar Calculation Details * <http://www.srrb.noaa.gov/highlights/sunrise/calcdetails.html> * * Transformation to C++ by Ilmatieteen Laitos, 2008. * * License: * UNKNOWN (not stated in JavaScript) */ // ====================================================================== #pragma once #include "Exception.h" #include <cmath> #include <boost/date_time/local_time/local_time.hpp> #include <boost/date_time/posix_time/posix_time.hpp> #include <boost/math/constants/constants.hpp> namespace Fmi { namespace Astronomy { #define INTDIV(x) (x) inline double rad2deg(double rad) { return rad * boost::math::constants::radian<double>(); } inline double deg2rad(double deg) { return deg * boost::math::constants::degree<double>(); } inline double sin_deg(double deg) { return sin(deg2rad(deg)); } inline double cos_deg(double deg) { return cos(deg2rad(deg)); } inline double tan_deg(double deg) { return tan(deg2rad(deg)); } /* Clamp to range [a,b] */ inline void clamp_to(double& v, double a, double b) { v = (v < a) ? a : (v > b) ? b : v; } /* * Check 'lon' and 'lat' parameters for validity; clamp to (-180,180] and * [-89.8,89.9] range before calculations. */ inline void check_lonlat(double& lon, double& lat) { if (fabs(lon) > 180.0) throw Fmi::Exception(BCP, "Longitude must be in range [-180,180]"); if (fabs(lat) > 90.0) throw Fmi::Exception(BCP, "Latitude must be in range [-90,90]"); clamp_to(lat, -89.8, 89.8); // exclude poles } /* * hour angle of the Sun at sunrise for the latitude * * Returns: hour angle of sunrise/set in radians ('nan' if no sunrise/set) */ inline double HourAngleSunrise_or_set(double lat, double solarDec, bool rise) { double ha = acos(cos_deg(90.833) / (cos_deg(lat) * cos_deg(solarDec)) - tan_deg(lat) * tan_deg(solarDec)); return rise ? ha : -ha; // rad } inline double rad(double d) { return d * 0.017453292519943295; } inline double Deg(double d1) { return (d1 * 180) / 3.1415926535897931; } inline double julianDay(const boost::posix_time::ptime& utc) { double d3 = utc.time_of_day().total_seconds(); double d1 = utc.date().day() + d3 / 86400; int month = utc.date().month(); int year = utc.date().year(); if (month <= 2) { month += 12; year--; } int k1 = year / 100; int l1 = (2 - k1) + k1 / 4; double d2 = static_cast<long>(365.25 * (year + 4716)) + static_cast<long>(30.600100000000001 * (month + 1)) + d1 + l1 + -1524.5; return d2; } inline double reduce(double d1) { d1 -= 6.2831853071795862 * static_cast<int>(d1 / 6.2831853071795862); if (d1 < 0.0) d1 += 6.2831853071795862; return d1; } /** * Takes the day, month, year and hours in the day and returns the * modified julian day number defined as mjd = jd - 2400000.5 * checked OK for Greg era dates - 26th Dec 02 */ inline double modifiedJulianDate(short month, short day, short year) { if (month <= 2) { month += 12; year--; } double a = 10000.0 * year + 100.0 * month + day; double b = 0.0; if (a <= 15821004.1) { b = -2 * (int)((year + 4716) / 4) - 1179; } else { b = (int)(year / 400) - (int)(year / 100) + (int)(year / 4); } a = 365 * year - 679004; return a + b + (int)(30.6001 * (month + 1)) + day; } } // namespace Astronomy } // namespace Fmi // ======================================================================
STACK_EDU
Simple. Well, not quite: somewhere between the two commands a little scaling and resizing takes place, and with fonts you also need to tweak and adjust the width or WMFIN will just bring that text back as TEXT entities. Here is what appears to be a very long method to do it correctly: once you get a hang of the steps, this is indeed quite a simple task. - Setup the text style to use an width other than exactly 1. It can be .9999 or 1.000001 for example. - Create the text or use property painter to 'paint' the new info to - Some where in the drawing, create a line that will be used for reference later. THIS IS IMPORTANT. - At the command prompt, type WMFOUT. - Select the text and the reference line (real easy if you have the text on a separate layer). - Erase (or freeze) the 'real' text but keep the reference line. - At the command prompt, type WMFIN & select the wmf file you just created - Notice the wmf doesn't come in at the same scale - the reason for the - After selecting the base point, use the default scale & rotation - Now, move the block made by importing the WMF so one endpoint of the wmf's ref line matches up with the corresponding endpoint of the original - At the command prompt, type SCALE and select the wmf block - For the base point, select the endpoint you used to match up ref lines with - the common endpoint. - At the command prompt, type R for reference. - Pick the 'common' endpoint. - Pick the other endpoint of the wmf's ref line. - Lastly pick the other endpoint of the original ref line. - Now you can explode the block and the text should be lines. - If you're using True Type fonts, you'll get lots of little lines. To clean it up easily do the following from 19-26 unless you really want to just sit there erasing and cleaning up manually. - After the explode, use CHPROP or PROPERTIES and select the objects with the PREVIOUS selection option. - Put these objects on a layer by themselves for easy removal. - Draw a rectangle around the text. - Use the BOUNDARY command and pick a point between the rectangle and the - Freeze or lock every layer but the layer the text is on and erase it. - Change the boundary you created to that layer for easy removal later - Use the boundary command again and pick "inside" the outline of the - Freeze or lock every layer but the layer the first boundary is on and
OPCFW_CODE
Making S3 (almost) as fast as local memory AWS Lambda is amazing, and as I talked about last time, can be used for some pretty serious compute. But many of our potential use cases involve data preprocessing and data manipulation – the so-called extract, transform, and load (ETL) part of data science. Often people code up Hadoop or Spark jobs to do this. I think for many scientists, #thecloudistoodamnhard – learning to write Hadoop and Spark jobs requires both thinking about cluster allocation as well as learning a Java/Scala stack that many are unfamiliar with. What if I just want to resize some images or extract some simple features ? Can we use AWS Lambda for high-throughput ETL workloads? I wanted to see how fast PyWren could get the job done. This necessitated benchmarking S3 read and write from within Lambda. I wrote an example first write a bunch of objects with pseudorandom data to s3, and then read those objects out. The data is pseudorandom to make sure our results aren’t confounded by compression, and everything is done with streaming file-like objects in Python. To write 1800 2GB objects to the S3 bucket jonas-pywren-benchmark I can use the following $ python s3_benchmark.py write --bucket_name=jonas-pywren-benchmark \ --mb_per_file=2000 --number=1800 --key_file=big_keys.txt Each object is placed at a random key, and the keys used are written big_keys.txt, one per line. This additionally generates a python pickle (start time, stop time, transfer rate) for each job. We can then read these generated s3 objects with with : $ python s3_benchmark.py read --bucket_name=jonas-pywren-benchmark \ We can look at the distribution of per-job throughput, the job runtimes for the read and write benchmarks, and the total aggregate throughput (colors are consistent – green is write, blue is read): Note that at the peak, we have over 6O GB/sec read and 50 GB/sec write to S3 – that’s nearly half a terabit a second of IO! For comparison, high-end Intel Haswell Xeons get about ~100GB/sec to RAM. On average, we see per-object write speeds of ~30 MB/sec and read speeds are ~40MB/sec. The amazing part is this is nearly linear scaling of read and write throughput to S3. I struggle to comprehend this level of scaling. We’re working on some applications now where this really changes our ability to quickly and easily try out new machine learning and data analysis pipelines. As I mentioned when talking about compute throughput, this is peak performance – it’s likely real workloads will be a bit slower. Still, these rates are amazing!
OPCFW_CODE
Change in energy ideal gas I am supposed to calculate the change in energy upon changing both the temperature from $T_1$ to $T_2$ and the volume from $V_1$ to $V_2$. Now I was wondering whether this solution is correct: We can treat both transition independent from each other, as the energy is not path-dependent and therefore we could have $$\Delta(E) = \frac{3}{2} k_BN (T_2-T_1) -N k_B T ln(\frac{V_2}{V_1})$$. What is $T$ on the right hand side? Also, where did your second term on the right hand side come from? @NowIGetToLearnWhatAHeadIs yes, actually I don't really know. The second term comes from $\int_{V_1}^{V_2} \frac{N k_B T}{V} dV = N k_B T ln( \frac{V_2}{V_1})$ Your idea is correct but the calculation is not. For $N$ moles of an ideal gas you have $U = cNRT$, look that if $T$ doesn't change, $U$ also doesn't change. So you start at the state $(T_1,V_1)$ and want to go to the state $(T_2,V_2)$. You then use two process corresponding to the following sequence of states: $$(T_1,V_1)\to (T_2,V_1)\to (T_2,V_2)$$ On the first, you indeed have $\Delta U_1 = cNRT_2 - cNRT_1 = cNR(T_2-T_1)$ but the second one however has temperature equal to $T_2$ so that $\Delta U_2 = 0$. This implies that the total change in energy is $\Delta U = cNR(T_2-T_1)$ which is just the first term you wrote. Edit: About your explanation to the second term, the quantity $Nk_B T/V$ is indeed pressure, since $PV = Nk_B T$. Now, because of that your integral gives in truth the negative of the amount of work on the second process. Recall that $\delta W = -PdV$ and so $$W = -\int_\Gamma PdV$$ But by the first law of Thermodynamics, $\Delta U = W + Q$, so that change of energy is not made up just of work, but of heat also. By the formula for energy you could then see that for this process $Q = -W$. why does a change in volume not result in a change of energy? I mean there is the integral $\Delta E = \int p dV = \int \frac{N k_B T}{V} dV = N k_B T ln(V)$ well, actually this answer surprises me. I was first suppose to calculate the energy by changing the temperature of an ideal gas from $T_1$ to $T_2$. THen I was supposed to do it for a volume change $V_1$ to $V_2$, after that for both couples processes(which is essentially my question). Finally, I am now supposed to calculate the necessary volume change to maintain the energy if we raise the temperature from $T_1$ to $T_2$. If I understand you correctly, then it is impossible to accomplish the last situation. I didn't say to calculate the volume change to maintain the energy. I said that since the energy of an ideal gas depends just on the temperature, on the second process the change will be zero because both initial and final temperature will be $T_2$. In that case the total change in energy will just come from the first process. What I said on the edit, is that the integral you speak of gives you work, not total change in energy, which accounts for heat as well. Now, if you use the formula $U = cNRT$ for the energy, you see that on the second process $\Delta U_2 = 0$. yes, but this is a homework question and there I am also supposed to say how much I need to change my volume in order to maintain the energy if the temperature is raised. If I understand you correctly, this is impossible. Again, if temperature is held fixed, you cannot change the internal energy of the gas, this would violate the equation of state $U = cNRT$. I believe the problem was built with the intention of making you see that the internal energy of an ideal gas depends just on its temperature. So it is really impossible to do it. To be able to change the energy back you would need to change the temperature back anyway. No, your solution is not correct. The energy difference is by definition the first term in your formula, you should drop the second term. The work done during the process is path dependent, so you don't have enough information to calculate the work done, neither the transferred heat (which is the sum of the energy difference and the work). One possible way to perform that process is to first heat the gas and keep the volume fixed (for this the work is zero and the heat is given by your formula), then (assuming $V_2>V_1$) you instanteniously change the volume (move the walls out). This does not change the velocity of any particles, so no energy and temperature change happens, and no heat is transferred. This is of course an irreversible process, you can perform the change in the volume in reversible ways too, giving some nonzero work and nonzero heat if you keep T constant. well, actually this answer surprises me. I was first suppose to calculate the energy by changing the temperature of an ideal gas from $T_1$ to $T_2$. THen I was supposed to do it for a volume change $V_1$ to $V_2$, after that for both couples processes(which is essentially my question). Finally, I am now supposed to calculate the necessary volume change to maintain the energy if we raise the temperature from $T_1$ to $T_2$. If I understand you correctly, then it is impossible to accomplish the last situation. As long as the temperature of an ideal gas does not change its internal energy will remain the same. Change in volume without changing the temperature as in an isothermal process will not bring about any change in the internal energy of the gas. However a change in temperature of the gas by any means would result in change in internal energy of the gas as given by the first term on the right of your expression for energy change.
STACK_EXCHANGE
import pytest from lyncs_io.mpi_io import Decomposition from lyncs_io.testing import mark_mpi # TODO: Generalize on higher dimensions (Currently tested for cart_dim<=2) @mark_mpi def test_comm_types(): from mpi4py import MPI # Requires a communicator with pytest.raises(TypeError): Decomposition() comm = MPI.COMM_WORLD size = comm.size rank = comm.rank # Check Graph topology index, edges = [0], [] for i in range(size): pos = index[-1] index.append(pos + 2) edges.append((i - 1) % size) edges.append((i + 1) % size) topo = comm.Create_graph(index[1:], edges) with pytest.raises(TypeError): Decomposition(comm=topo) topo.Free() # Check DistGraph topology sources = [rank] degrees = [3] destinations = [(rank - 1) % size, rank, (rank + 1) % size] topo = comm.Create_dist_graph(sources, degrees, destinations, MPI.UNWEIGHTED) with pytest.raises(TypeError): Decomposition(comm=topo) topo.Free() # Check Cartesian topology ndims = 2 dims = MPI.Compute_dims(size, [0] * ndims) topo = comm.Create_cart(dims=dims, periods=[False] * ndims, reorder=False) decomp = Decomposition(comm=topo) assert dims == decomp.dims assert topo.Get_coords(rank) == decomp.coords topo.Free() # Check COMM_WORLD decomp = Decomposition(comm=comm) assert [size] == decomp.dims assert [rank] == decomp.coords @mark_mpi def test_mpi_property(): from mpi4py import MPI assert hasattr(Decomposition(MPI.COMM_WORLD), "MPI") @mark_mpi def test_comm_Decomposition(): from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.size rank = comm.rank dec = Decomposition(comm=comm) # No remainder domain = [8 * size, 12] globalsz, localsz, start = dec.decompose(domain=domain) assert domain == globalsz assert [8, 12] == localsz if rank == 0: assert [0, 0] == start elif rank == size - 1: assert [8 * (size - 1), 0] == start # Remainder=1 domain = [8 * size + 1, 12] globalsz, localsz, start = dec.decompose(domain=domain) assert domain == globalsz if rank == 0: # First process takes the remainder assert [9, 12] == localsz assert [0, 0] == start elif rank == size - 1: assert [8, 12] == localsz assert [8 * (size - 1) + 1, 0] == start # More workers than data with pytest.raises(ValueError): dec.decompose(domain=[0] * len(domain)) @mark_mpi def test_cart_decomposition(): from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.size rank = comm.rank # TODO: Ensure testing generalizes in arbitrary dimension ndims = 2 dims = MPI.Compute_dims(size, [0] * ndims) topo = comm.Create_cart(dims=dims, periods=[False] * ndims, reorder=False) coords = topo.Get_coords(rank) dec = Decomposition(comm=topo) # No remainder domain = [8 * dims[0], 8 * dims[1], 4, 4] globalsz, localsz, start = dec.decompose(domain=domain) assert domain == globalsz assert [8, 8, 4, 4] == localsz if coords[0] == 0 and coords[1] == 0: assert [0, 0, 0, 0] == start elif coords[0] == dims[0] and coords[1] == dims[1]: assert [8 * (dims[0] - 1), 8 * (dims[1] - 1), 0, 0] == start # Remainder=1 in each dimension domain = [8 * dims[0] + 1, 8 * dims[1] + 1, 4, 4] globalsz, localsz, start = dec.decompose(domain=domain) assert domain == globalsz if coords[0] == 0 and coords[1] == 0: assert [9, 9, 4, 4] == localsz assert [0, 0, 0, 0] == start elif coords[0] == dims[0] and coords[1] == dims[1]: assert [8 * (dims[0] - 1) + 1, 8 * (dims[1] - 1) + 1, 0, 0] == start # More workers than data with pytest.raises(ValueError): dec.decompose(domain=[0] * len(domain)) @mark_mpi def test_comm_composition(): from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.size rank = comm.rank dec = Decomposition(comm=comm) # No remainder local_size = [8, 8] globalsz, localsz, start = dec.compose(local_size) assert [size * 8, 8] == globalsz assert local_size == localsz assert [rank * 8, 0] == start # Remainder=1 if rank == 0: local_size = [9, 8] globalsz, localsz, start = dec.compose(local_size) assert [size * 8 + 1, 8] == globalsz assert local_size == localsz if rank == 0: assert [rank * 8, 0] == start else: assert [rank * 8 + 1, 0] == start @mark_mpi def test_cart_composition(): from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.size rank = comm.rank # TODO: Ensure testing generalizes in arbitrary dimension ndims = 2 dims = MPI.Compute_dims(size, [0] * ndims) topo = comm.Create_cart(dims=dims, periods=[False] * ndims, reorder=False) coords = topo.Get_coords(rank) dec = Decomposition(comm=topo) # No remainder local_size = [8, 8, 4, 4] globalsz, localsz, start = dec.compose(domain=local_size) assert [dims[0] * 8, dims[1] * 8, 4, 4] == globalsz assert local_size == localsz assert [coords[0] * 8, coords[1] * 8, 0, 0] == start # Remainder=1 in horizontal dimension local_size = [8, 8, 4, 4] if coords[0] == 0: local_size = [9, 8, 4, 4] globalsz, localsz, start = dec.compose(local_size) assert [dims[0] * 8 + 1, dims[1] * 8, 4, 4] == globalsz assert local_size == localsz if coords[0] > 0: assert [coords[0] * 8 + 1, coords[1] * 8, 0, 0] == start else: assert [0, coords[1] * 8, 0, 0] == start # Remainder=1 in vertical dimension local_size = [8, 8, 4, 4] if coords[1] == 0: local_size = [8, 9, 4, 4] globalsz, localsz, start = dec.compose(local_size) assert [dims[0] * 8, dims[1] * 8 + 1, 4, 4] == globalsz assert local_size == localsz if coords[1] > 0: assert [coords[0] * 8, coords[1] * 8 + 1, 0, 0] == start else: assert [coords[0] * 8, 0, 0, 0] == start # Remainder=1 in each dimension local_size = [8, 8, 4, 4] if coords[0] == 0 and coords[1] == 0: local_size = [9, 9, 4, 4] elif coords[1] == 0: local_size = [8, 9, 4, 4] elif coords[0] == 0: local_size = [9, 8, 4, 4] globalsz, localsz, start = dec.compose(local_size) assert [dims[0] * 8 + 1, dims[1] * 8 + 1, 4, 4] == globalsz assert local_size == localsz if coords[0] == 0 and coords[1] == 0: assert [0, 0, 0, 0] == start elif coords[0] == 0 and coords[1] > 0: assert [0, coords[1] * 8 + 1, 0, 0] == start elif coords[0] > 0 and coords[1] == 0: assert [coords[0] * 8 + 1, 0, 0, 0] == start else: assert [coords[0] * 8 + 1, coords[1] * 8 + 1, 0, 0] == start
STACK_EDU
ProxyMesh offers an easy way to make use of open proxies. The typical way of using open proxies involves compiling a long list of IPs that must be updated at least once a day, and using software that can automatically switch between these IPs. These open proxy IPs must also be regularly checked to ensure they're still in operation, and are not inserting malicious code into your requests. ProxyMesh eliminates these requirements. All you have to do is configure your client to access the ProxyMesh open proxy server, and ProxyMesh takes care of the rest. To learn more & signup, go to the ProxyMesh Pricing Page. And for details on how this open proxy server works, read on... Open Proxy IPs The ProxyMesh open proxy server maintains a list of known open proxy IPs, which are used by the proxy server to forward your requests. This works just like the US & UK proxies, except instead of going thru low-latency elite proxies, your requests are forwarded thru less reliable & higher latency open proxies. While a large percentage of these proxies are in the US, over half are located in many different countries around the world. These proxies typically do not stay online very long, and are not operated on reliable infrastructure. The tradeoff for this lack of reliability is a huge increase in quantity & variability of IP addresses. The ProxyMesh open proxies list typically countains at least 1000 IPs at a time, with approximately 100 IPs changing every hour. Because open proxies are not reliable, real-time error checking is required to provide a consistent service. If any request thru an open proxy fails due to a proxy error, that error is recorded, and the request is re-tried up to 3 more times, using a different proxy each time. Any proxy that gets 3 or more errors will immediately be removed from the list. To keep a fresh open proxy list, the proxy list is re-checked every 15 minutes, and any proxies that fail these checks are removed. These checks test the following: - a valid request can be sent thru the open proxy - a valid response is received within 3 seconds - that response has not been corrupted and does not contain malicious code - the open proxy does not have a known abusive IP address Over 95% of open proxies fail these tests. Therefore, the ProxyMesh open proxy server only keeps the 5% of open proxies that are actually usable. Even with periodic proxy checks & realtime removal of unreliable proxies, you must have a retry strategy when using this proxy server. Open proxies are, by their nature, unreliable & error prone. They could go down at any time, be configured incorrectly, or they may not honor your specific request. For these reasons, you should retry all 40x & 50x response codes at least 3 times, and optionally make use of the custom ProxyMesh headers. The way this could work is: - make a request thru the open proxy server - get a 40x or 50x response error - from the response, extract the IP from the X-ProxyMesh-IP header - retry the request with a X-ProxyMesh-Not-IP header containing the IP The X-ProxyMesh-Not-IP header can take a comma list of IP addresses, so you can accumulate bad IPs to skip for future requests. If you do this, it is recommended to cache the IPs for a maximum of 1 day, as they will likely be out-of-date or offline after 24hrs. Elite vs Open Proxies The ProxyMesh open proxies server provides a very different set of tradeoffs compared to the US & UK proxy servers. With the open proxy server, you get quantity & variability at the expense of speed & reliability. And the US & UK proxies are what's known as elite proxies, in that they remove all identifying information from requests, whereas the open proxies provide no guarantee of anonymity. So if you need speed, reliability, and anonymity, then the US & UK proxies are the best choice. But if you want a lot of IPs, the open proxy server might be an acceptable tradeoff.
OPCFW_CODE
IBM i Open Source Business Architect Lays Out A Plan January 18, 2017 Dan Burger Enterprise level application development is no place for open source languages. Can you believe it? That was once the widely accepted truth. Jiminy Crickets! Things have changed. The number of the stable open source distributions available with comprehensive support and maintenance goes well beyond common knowledge. Industry giants, successful SMB players, and mom and pop businesses are finding good reasons to use open source. Even IBM uses open source for internal business reasons. There are reasons for you to do the same. Before you recoil and brace for a “mind your own business and I’ll mind mine” retaliation against open source, observe what’s happening compared to what your information technology is doing for your business and possibly to your business. Is it holding you back or moving you forward? Are you only seeing costs when you should be looking for savings? These are simple questions, but they should be asked and answered frequently. “During the past two years one of the biggest areas of new development and one of the areas that affect how app dev is going to be done is open source,” says Steve Will, an open source advocate who also happens to be the chief architect for the IBM i operating system. “The number of people talking about open source on i has grown tremendously.” It’s a good story. And it’s a true story, too. But percentage growth looks best when comparisons begin with small numbers. Just a couple of years ago, the number of IBM i pros interested in open source probably wouldn’t fill a school bus. But now a single IBM i OSS (open source software) LinkedIn group has more than 650 members, and there are other indicators of increasing IBM i open source enthusiasm. Twenty-nine open source-related sessions are on the online session guide for the COMMON Annual Meeting and Exposition scheduled for May 7-10 in Orlando, Florida. That’s close to 10 percent of the total conference educational sessions without even accounting for the Linux sessions, which by the way, seem to be fewer than the number of Linux sessions at the 2016 conference. During Will’s nine-plus years as IBM i chief architect the emphasis on open source has steadily increased. Late last year, he chose Jesse Gorzinski to fill a new position called business architect for IBM i open source. The business architect title reflects the focus on aligning technology with business IBM’s business as well as IBM’s customers’ and partners’ businesses. Open source software was formerly part of the technology load carried by Tim Rowe, the IBM i business architect for application development and systems management. “The strategy of the platform is to embrace more open source,” Will says. “It made sense to break off open source from traditional application development and Jesse has skills in the open source area.” Gorzinski is also the co-chair of the IBM i ISV Advisory Council. “Today the options are across the board whether talking about servers (the Apache HTTP server, for example) or (development) languages,” Gorzinski says. “The languages are the biggest noise makers. They are driving a lot of interest. We have clients and business partners looking for ways leverage these languages to do their stuff.” He singled out the modernization of RPG, C, or COBOL investments using Web services, mobile, and other Web technologies. “There are cases where we see growth in RPG development because of the new ways we can leverage RPG with open source,” he says. “The things people want to do with their modernization efforts are typically aligned very well with what the open source languages and frameworks offer.” To its credit, IBM has delivered the integration pieces from Node.js and Python that allow quick ways to talk with DB2. And on the skill side of things, Gorzinski sees IBM i shops increasing their investments in a larger application development strategy that modernizes RPG applications and the surrounding app dev tools. “Just about anybody can be an IBM i developer these days. That’s a huge part of the business proposition of these languages. There are college kids graduating with these open source skills,” he says. “I am hearing from people who are writing applications on IBM i that would not have been doing so if the open source language options were not available. Ruby, Python, PHP, Node and modern RPG are languages that non-RPG developers can understand.” Gorzinski says there are three key pieces in the IBM i open source roadmap. The first is to continue to expand open source technology that can be used on the platform, especially where it makes the most sense for IBM i business. The second is to further enable clients and partners to extend and use these technologies. That would most likely include more integration pieces, more educational opportunities, and a high level of collaboration with ISVs. The third key is continually growing the IBM i open source community. IBM is doing its part in the community by listening, observing, participating, and establishing priorities. Unwilling to expose too much of his playbook, Gorzinski says there will soon be new deliveries related to Git (version control), Perl, and SQLite. Although the current IBM i development community seems more oriented to RPG developers learning new development languages than it does toward young developers with open source skills but little or no IBM i skills, Gorzinski describes it as a mix of classic RPG developers who see the value that open source brings and developers who are new to IBM i but are learning to write code for that system. In some ways, open source development is reminiscent of when Java first made inroads into IBM AS/400 development. Java was the “outside development community” then. Now they are insiders looking at the new group of outsiders. “I see Java developers being pulled to the new open source languages just like they were previously pulled to Java,” Gorzinski says. “Back in the day, Java had an open source-style community that really bloomed. Java and RPG integration was a big thing. Graphing, reporting and Web stuff all available by integrating with Java. Now the languages that are blooming are PHP, Ruby, Node, and Python.” “One of the reasons Java became big was because businesses saw there were more and more Java developers and they wanted to apply those skills to their businesses,” Will says. “Now we are in an environment where those same businesses are looking to apply the newest developers and development skills. The Java community, like the RPG community, grew from business needs. We won’t see 100 percent of the Java community embracing new tools and learning how to use them, but a lot of them are.” The future of businesses programming will not rely on a single language. Decisions on which languages to use will be based on options that are best suited for the task and that have value based on low maintenance and ease of support.
OPCFW_CODE
I loved SotS's focus on simple mechanics that allow player choices to shape the story. A lot of the advice on getting players to provide setting details are practices that I have already used for years, such as having players create atmosphere by having them recount travails of long journeys. It is just nice to see these techniques as an official part of a game. These narrative practices are backed up by mechanics, such as attributes that represent both a value, as well as a pool that can be spent to affect rolls or buy story effects. In SotS, you use a handful of Investigative and General abilities to shape the world, and thus the story. This allows the game to retain a lot of the simple magic of fantasy roleplaying, which I found dwindling away when I played 3E and 4E briefly. The feel of freedom created by exchanging long lists of KEWL POWURZ for simple narrative mechanics replicates well what Dr Bargle called the Pathetic Aesthetic of the (original, pre 5E) OSR. I think SotS will be a great antidote to the rules bloat seen in the progression of D&D over editions. With the inclusion of Proficiencies, then Feats & Abilities, D&D killed lots of the magic of play for me by trying to mechanically codify all actions with new rule subsystems. 4e went too far with rules for my taste, while 5e seems a step back to the rulings over rules mode of the original, pre 5E OSR movement. That said, old D&D and other FRPGS were far from perfect. The OSR added lots of much needed shot of improvisational freedom to gaming, a thing which was lacking when I started playing back in the 80s, when a cult of TSR 'sanctioned' rules defined how many played and ran the game. So instead of going into SotS here, I'd like to think about ways retroclones can be storified in SotS style to promote improvisational DMing and emergent play. BETTER PRIME REQUISITESFirst, the Prime Requisite (PR) of each class serves as 1) an indication of free actions, for which no roll is needed 2) a value for difficult skill tests, and 3) a pool of points to be spent for story effects. For fighters, Strength is the PR, thus any minor action involving Strength does not require a roll but succeeds naturally. For instance, climbing a rope, lifting a barrel, or doing anything a strong person could requires no roll to succeed for a Fighter, whereas other classes would have to make a simple roll. Also, any Strength based action requiring a difficult test for other classes would only require a simple one for Fighters. For example, breaking down an iron door, holding onto a dragon's back, etc. Note that I use d20 resolution for simple tests and d100 for difficult ones, but other DMs are free to use their own system. Finally, a Fighter player can spend a point of Strength to earn a story effect. Note that this spending doesn't reduce the Strength value for tests, but instead is a limited pool of points that only regenerates after the adventure ends, even if it runs over several sessions, forcing players to spend points wisely. Players are encouraged to creatively narrate the effect, and DMs should refuse boring or unimaginative uses. For instance, a buy of 1 point could allow an unarmed fighter to bend farming tool into a sword for one encounter, or intimidate 1 NPC / monster, or hold onto a ceiling and stay out of sight as a too powerful foe passes. Other classes can do the same with their PRs. Dexterity for Thieves, Intelligence for Magic Users, and Wisdom for Clerics would offer similar benefits and opportunities to shape the story for players of these classes. Doing so uses the pre-existing attribute system without bloating the rules with ultimately limiting Feats or other subsystems, and instead offering players the chance to use their imagination to shape the story in ways that will surprise and entertain all at the gaming table. PS: If anyone wants to hear how running SotS went, drop me a line!
OPCFW_CODE
f you’re not a developer, you’re not going to understand Atlassian’s success. Atlassian employs no salespeople, yet it’s doing over $200 million in annual sales, according to a recent report in The Wall Street Journal. While enterprise software companies struggle to make their wares more consumer-friendly, Atlassian builds software that only a developer could love: It’s geeky, not super intuitive and frankly somewhat unpleasant to use for a business user like myself. Yet it’s now worth $3.3 billion. How’s that? Of The Developer, For The Developer Atlassian co-founder Scott Farquhar told The Wall Street Journal that “These days, people are making decisions based on how good the products are.” The definition of “good” may not be the same for developers as it is for the average business user, however. Wikis, issue tracking systems, Git code hosting, etc.—these are not tools your head of marketing really wants to use. I should know: Every time I have to fill out a JIRA request to get content changed on my company’s website, a little part of me dies inside. Then again, I’m not Atlassian’s target market. The developer is. And developers love Atlassian. In the world of developers, the definition of “ease of use” differs. This is a world that still thinks fondly on the command line. Even among this crowd, however, Twitter’s Chris Aniszczyk posits that Atlassian’s software may not be the best, but rather the best of a bad lot: I’ll take Chris’ word since I’m not much of a developer tools power user myself, but it’s his latter argument that I find so compelling: Atlassian succeeds, in part, because it treats its developer audience with serious respect. Giving Tribute To Developers While the first part of Ryan’s comment suggests Atlassian doesn’t deserve much credit, it’s the second half that really sets Atlassian apart. Developers don’t want unnecessary frills that get in the way of productivity. This same desire is what has driven GitHub, AWS and other developer-focused software to succeed. That group of tools developers love is a very small club. As it turns out, it’s very hard to develop tools a wide array of developers want to use. Not only does Atlassian support the things developers already do, but as Operational Results web developer Cody Nolden notes, Atlassian’s tools may actually expose problems in team workflows: They’re very configurable and can match whatever workflow your team uses. I’ve found that when I struggle to use Atlassian tools it’s because of more underlying struggles as a team not knowing what process we follow and we haven’t configured accordingly. Ultimately, Atlassian succeeds not because it’s the best tool among a bad bunch, but because it respects developers’ time and concerns. Tools like JIRA are intentionally not flashy. They’re utilitarian, not because Atlassian lacks creativity, but because the company cares more about what developers want than what marketing or sales or other groups within a company may want. This shows not only in the software itself, but also in how it’s sold: Atlassian is salesperson-free, over-the-web, and costs a reasonable amount of money. That’s a great strategy for appealing to developers.
OPCFW_CODE
/** * `Omit<P, K>` removes the keys in K from type T */ export type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>; /** * `inList` accepts a list of `values` and returns a function that accepts * a `value` and returns `true` if it is in the list * * @param values The values to allow */ export const inList = <A extends T[], T>(...values: A) => (value: T) => values.indexOf(value) > -1; /** * `inList` accepts a list of `values` and returns a function that accepts * a `value` and returns `true` if it is not in the list * * @param values The values to disallow */ export const notInList = <A extends T[], T>(...values: A) => (value: T) => values.indexOf(value) === -1; /** * `allowKeys` accepts a list of allowed `keys` of `P` and returns a function * that accepts a `key` and returns `true` if it is in the list * * @param values The keys to allow */ export const allowKeys = < P extends object = { [k: string]: any }, K extends keyof P = keyof P, KAll extends K[] = K[] >( ...keys: KAll ) => inList<KAll, keyof P>(...keys); /** * `disallowKeys` accepts a list of allowed `keys` of `P` and returns a function * that accepts a `key` and returns `true` if it is in the list * * @param values The keys to disallow */ export const disallowKeys = < P extends object = { [k: string]: any }, K extends keyof P = keyof P, KAll extends K[] = K[] >( ...keys: KAll ) => notInList<KAll, keyof P>(...keys); /** * Accepts a list of keys `K` of `P` to pick and returns a function that * accepts a `P` object and returns an object with the picked keys */ export const pickKeys = < P extends object, K extends keyof P = keyof P, KAll extends K[] = K[] >( ...keys: KAll ) => (props: P): Pick<P, K> => { const allowKeyFilter = allowKeys<P, K>(...keys); const o = {} as Pick<P, K>; const existingKeys = Object.keys(props) as Array<keyof P>; for (const key of existingKeys) { if (allowKeyFilter(key)) { o[key as K] = props[key as K]; } } return o; }; /** * Accepts a list of keys `K` of `P` to pluck/omit and returns a function that * accepts a `P` object and returns an object with the plucked keys removed */ export const pluckKeys = < P extends object, K extends keyof P = keyof P, KAll extends K[] = K[] >( ...keys: KAll ) => (props: P): Omit<P, K> => { const omitKeyFilter = disallowKeys<P, K>(...keys); const o = {} as Omit<P, K>; const existingKeys = Object.keys(props) as Array<keyof P>; for (const key of existingKeys) { if (!omitKeyFilter(key)) { o[key as Exclude<keyof P, K>] = props[key as Exclude<keyof P, K>]; } } return o; };
STACK_EDU
Performance / correctness issues with inner joins on columns of type FixedSizeBinary(16) I've attached two parquet files. Both files contain a single column with 131072 rows, generated from Arrow with a single record batch. The fsb16.parquet file contains a column of type FixedSizeBinary(16), the ints.parquet contains a column of type Int64. If I do an inner join, the query returns really quickly: ❯ create external table t0 stored as parquet location 'ints.parquet'; ❯ select * from t0 inner join t0 as t1 on t0.ints = t1.ints; +--------+--------+ ...[snip]... +--------+--------+ 131072 rows in set. Query took 0.530 seconds. Here is the plan for the int64 query: ❯ explain select * from t0 inner join t0 as t1 on t0.ints = t1.ints; +---------------+----------------------------------------------------------------------------------------------------------------------------------+ | plan_type | plan | +---------------+----------------------------------------------------------------------------------------------------------------------------------+ | logical_plan | Projection: t0.ints, t1.ints | | | Inner Join: t0.ints = t1.ints | | | TableScan: t0 projection=[ints] | | | SubqueryAlias: t1 | | | TableScan: t0 projection=[ints] | | physical_plan | ProjectionExec: expr=[ints@0 as ints, ints@1 as ints] | | | CoalesceBatchesExec: target_batch_size=8192 | | | HashJoinExec: mode=Partitioned, join_type=Inner, on=[(Column { name: "ints", index: 0 }, Column { name: "ints", index: 0 })] | | | CoalesceBatchesExec: target_batch_size=8192 | | | RepartitionExec: partitioning=Hash([Column { name: "ints", index: 0 }], 8), input_partitions=8 | | | RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1 | | | ParquetExec: limit=None, partitions={1 group: [[Users/max/src/ul/services/ulv2/ints.parquet]]}, projection=[ints] | | | CoalesceBatchesExec: target_batch_size=8192 | | | RepartitionExec: partitioning=Hash([Column { name: "ints", index: 0 }], 8), input_partitions=8 | | | RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1 | | | ParquetExec: limit=None, partitions={1 group: [[Users/max/src/ul/services/ulv2/ints.parquet]]}, projection=[ints] | | | | +---------------+----------------------------------------------------------------------------------------------------------------------------------+ But if I do the same with the FixedSizeBinary(16) file, it takes a very long time, runs up a huge working set (seeing 170GB+ on my computer), and takes a long time. In much of my testing it runs out of memory and dies, but if it finishes it takes ~6 minutes (compared to 0.5s with the int64 columns) ❯ create external table t0 stored as parquet location 'fsb16.parquet'; ❯ select * from t0 inner join t0 as t1 on t0.journey_id = t1.journey_id; +----------------------------------+----------------------------------+ ...[snip]... +----------------------------------+----------------------------------+ 358946 rows in set. Query took 356.370 seconds. Also, I think the results are wrong; the result set should only have 131072 rows, not 358946 And the FixedSizeBinary(16) query plan: ❯ explain select * from t0 inner join t0 as t1 on t0.journey_id = t1.journey_id; +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | plan_type | plan | +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | logical_plan | Projection: t0.journey_id, t1.journey_id | | | Inner Join: Filter: t0.journey_id = t1.journey_id | | | TableScan: t0 projection=[journey_id] | | | SubqueryAlias: t1 | | | TableScan: t0 projection=[journey_id] | | physical_plan | ProjectionExec: expr=[journey_id@0 as journey_id, journey_id@1 as journey_id] | | | RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1 | | | NestedLoopJoinExec: join_type=Inner, filter=BinaryExpr { left: Column { name: "journey_id", index: 0 }, op: Eq, right: Column { name: "journey_id", index: 1 } } | | | ParquetExec: limit=None, partitions={1 group: [[Users/max/src/ul/services/ulv2/1677623589235.parquet]]}, projection=[journey_id] | | | ParquetExec: limit=None, partitions={1 group: [[Users/max/src/ul/services/ulv2/1677623589235.parquet]]}, projection=[journey_id] | | | | +---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ fsb16.parquet.gz ints.parquet.gz One thing I should mention is that I am testing with this patch applied to Arrow because otherwise it's significantly slower in the FixedSizeBinary(16) case: https://github.com/apache/arrow-rs/pull/3793 Performance issue fixed with #5461
GITHUB_ARCHIVE
Account Owners and Project Managers have several different options for filtering strings in the list view: (1) Filter by content type - Awaiting Authorization: Content that has been captured but not authorized for translation. - In Progress: Content that has been authorized but not yet published. - Completed: Content that has been published. - Excluded: Content that has been excluded from translation. (2) Filter by status - Pending assignment: Translations are authorized but still need to be assigned to translators - Available for pre-publishing: Translations can be pre-published but will remain in the workflow until all workflow steps are complete. - Not Translated: No translation is saved for the string. - Translated: A translation is saved for the string. - Unresolved issues: Displays all strings that still have open issues or questions. - Translation Same As Source - A translation has been saved for the string, which exactly matches the source string.' (3) Filter by workflow/step Allows you to show only strings belonging to a workflow or steps within a workflow. You can select a whole workflow with a checkbox, or any number of individual steps. Note: If your project has only one workflow, you see a simplified drop-down filter showing the steps of your workflow. (4) History Search Search for strings by their history. Select an action, a date range and, optionally, a user to find matching strings. Available actions may include: - Content Assigned - A string was assigned to a resource. This applies to workflow steps where the Assignment feature is enabled. - Content Authorized - A string was authorized for translation. - Content Moved - A string was moved from one workflow or workflow step to another. - Translation Submitted - A string was submitted from the translation step to the next step in a workflow. - Edit Submitted - A string was submitted from an editing step to the next step in a workflow. - Review Submitted - A string was submitted from a review step to the next step in a workflow. - Content Unauthorized - A previously authorized string was unauthorized (shown in Awaiting Authorization view only). For example, this search returns all strings authorized by the user 'Account Owner' in the last 7 days. (5) Keyword Search Search both original source string and translations for keywords. Smartling searches are not case sensitive and do not support search operators. (6) Filter by string properties Click the down arrow next to the keyword search. - URL: Filter by URL to find content within a particular file or from a particular link on your website. When using the URL filter, Smartling may display search results from other URLs if that content is shared. In a GDN project, Smartling associates the first URL from which content is captured with that string. If the same string appears on other URLs (e.g. navigation) it will only be associated with one single URL. For Files projects, URLs may be the file name or may be the name of the image context file if one has been applied. - File Name: Search for all strings belonging to a particular file. This filter works similarly to the URL search, but will always show all strings belonging to a particular file, even if image context has caused the strings to have different URLs. - Key/Variant: If the string is a variant use this filter to look up the Translations by variant or key metadata. - Context: Finds strings that have or do not have visual context. This filter appears for files projects only. - Search by Translation Resource: Find strings currently assigned to a specific Translation Resource. - Job: Find strings that are part of a specific Job, or have no Job assigned. - Search by Domain: If you have multiple domains setup in your project, filter content by domain. - Search by Tag: Filter for translations by tags. - Active/Inactive Strings: Show strings which are active, inactive or both. Use the toggle buttons to turn context thumbnails on and off. Click the gear wheel to choose to show string context and workflow step name against each string in the list view. Note: if you choose ‘Show Workflow Step Name’, you can also instantly set the filter to the workflow step of any string by clicking the step name:
OPCFW_CODE
Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the previous week. This post covers 2021-12-20 to 2022-01-03. - China suspends deal with Alibaba for not sharing Log4j 0-day first with the government. Note this isn't as bad as the headline makes it seems, as China only suspended a "cooperative partnership... regarding cybersecurity threats and information-sharing platforms." Regardless, it sends a clear message. If you find a vulnerability in China, you'd better tell the government about it before anyone else. - ZeroPeril Deep dive into executable packers & malware unpacking Training Course Announcement. New fully remote training that uses x86/x64dbg. Training is fully remote (Teams). - How did LastPass master passwords get compromised?. A number of users received emails that their master password had correctly been used from a suspicious location, even after changing it. Is this an email error or something deeper? Either way, not a good look for LastPass, which has already lost credibility. - In 2022, YYMMDDhhmm formatted times exceed signed int range, breaking Microsoft services. Duct tape and glue. It's all just duct tape and glue. - Android Application Testing Using Windows 11 and Windows Subsystem for Android. You've heard of the Windows subsystem for Linux, but how about the Windows subsystem for Andrid? Now you can use your favorite mobile assessment tools like objection and Burp suite without needing a real android device! - Hopper Disassembler. This post shows how to use Hopper to bypass simple jailbreak detection by modifying a single jump instruction. Sometimes it is that simple, but the trick is knowing which byte to change. - MS Teams: 1 feature, 4 vulnerabilities. None of these are severe, but some are simple issues that you wouldn't expect a market leader in connectivity to be making. - Attacks on Wireless Coexistence: Exploiting Cross-Technology Performance Features for Inter-Chip Privilege Escalation (PDF). System on a Chip (SoC) designs can include multiple wireless technologies with shared components. This overlap can lead to one compromised protocol being able to read or edit data on another medium via the shared resources. - How to exploit Log4j vulnerabilities in VMWare vCenter. Unauthenticated remote code execution as root against vCenter via Log4j. The post covers good post-exploitation options and even drops the PoC: Log4jCenter. - Where's the Interpreter!? (CVE-2021-30853). This dead-simple Gatekeeper bypass makes you wonder what other silly tricks are out there. Patrick doesn't stop at the PoC and dives deep into the root cause of this bug. Notably this fix is absent for Catalina (10.15.7), however my very limited testing indicates it may not be vulnerable. - A Deep Dive into DoubleFeature, Equation Group’s Post-Exploitation Dashboard. If you're interested in what "real" APT malware looks like, this long post covers a lot of tools. - Remote Process Enumeration with WTS Set of Windows APIs. With the proper privileges you can get a remote process list using standard Windows APIs. This would be a nice tool to avoid machines with EDR or other programs running. - CVE-2021-31956 vulnerability analysis (Chinese). This post explores CVE-2021-31956, a local privilege escalation within Windows due to a kernel memory corruption bug which was patched within the June 2021 Patch Tuesday and contains actual exploit code. - HyperGuard – Secure Kernel Patch Guard: Part 1 – SKPG Initialization - Dumping LSASS with Duplicated Handles. Rastamouse walks through how to use duplicated handles to dump LSASS which builds on his previous post on enumerating and duplicating handles. It still dumps to disk, so a pure in-memory implementation will get you even more evasion points. - Another Log4j on the fire: Unifi. Another great walkthrough on how to go from login page to backdoored appliance from Nicholas at Sprocket Security. 67,000 exposed instances on shodan... RIP in peace. - Phishing With Spoofed Cloud Attachments. "Abuse the way O365 Outlook renders cloud attachments to make malicious executable cloud attachments look like harmless files." This is phishing gold. Paired with a nice sandbox aware firewall/redirector it will likely yield success with a simple docuement.pdf.exe payload because the mail looks so good. - Edition 14: To WAF or not to WAF Effectiveness of WAFs are a hotly debated subject in AppSec circles. This post tries to bring a structure to that discussion. Tools and Exploits - KaynLdr is a Reflective Loader written in C / ASM. It uses direct syscalls to allocate virtual memory as RW and changes it to RX. It erases the DOS and NT Headers to make it look less suspicious in memory. - WMEye is a post exploitation tool that uses WMI Event Filter and MSBuild Execution for lateral movement. - hayabusa is a sigma-based threat hunting and fast forensics timeline generator for Windows event logs. Reminds me of chainsaw. - Tool Release – shouganaiyo-loader: A Tool to Force JVM Attaches. This loader forces Java agents to be loaded and can inject Java or JVMTI agents into Java processes (Sun/Oracle HotSpot or OpenJ9). - Invoke-Bof loads any Beacon Object File using Powershell! - Inject_Dylib is Swift code to programmatically perform dylib injection. New to Me This section is for news, techniques, and tools that weren't released last week but are new to me. Perhaps you missed them too! - Pentest Collaboration Framework is an open source, cross-platform, and portable toolkit for automating routine processes when carrying out vulnerability testing. - Registry-Spy is a cross-platform registry browser for raw Windows registry files written in Python. - iptable_evil is a very specific backdoor for iptables that allows all packets with the evil bit set, no matter the firewall rules. While this specific implementation is modeled on a joke RFC, the code could easily be modified to be more stealthy/useful. - Narthex is a modular & minimal dictionary generator for Unix and Unix-like operating system written in C and Shell. It contains autonomous Unix-style programs for the creation of personalized dictionaries that can be used for password recovery & security assessments. - whatfiles is a Linux utility that logs what files another program reads/writes/creates/deletes on your system. It traces any new processes and threads that are created by the targeted process as well. - The HatSploit Framework is a modular penetration testing platform that enables you to write, test, and execute exploit code. - TokenUniverse is an advanced tool for working with access tokens and Windows security policy. - LACheck is a multithreaded C# .NET assembly local administrative privilege enumeration. That's underselling it though, this has lots of cool enumeration capabilities such as remote EDR driver enumeration. - Desktop environment in the browser. This is just... wow. Code here: daedalOS. Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing. This post is cross-posted on SIXGEN's blog.
OPCFW_CODE
|Subject||Re: [firebird-support] Slowness| >gsndelphicoder wrote:It indicates a server crash. Both the frequent crashing AND the poor > >... I looked at the server log through IBConsole and it > >was riddled with the following message: > >GEMTRAK (Server) Tue Jan 10 09:11:40 2006 > > SERVER/process_packet: broken port, server exiting > >Can anyone tell me what might be causing this error and if it is > >likely the root of my slowness issues? performance are MOST likely to be symptoms of unfavourable application code and inappropriate indexing. At 02:40 PM 11/01/2006 -0200, Ivan Cruz wrote: > >Not obviously hardware faults in this case. A broken port can mean a >That error is related to the most likely cause: a network problem. >Try changing cables and switch ports. Since everybody is >experiencing slowness, start looking on server side of your network configuration fault, e.g. that the server's port (default port 3050) has been hijacked by another application (InterBase?) or that the tcp/ip service has been compromised (faulty DHCP configuration?) However, in such cases, it's fairly uncommon to get a useful (i.e. server-generated or guardian-generated) message in the log. You'd normally just see the network error. So it's useful that you are getting these messages from the Guardian. Since you say the log is "riddled" with this very message, it's likely that your application code is causing the server to crash. The first place to triage this problem will be calls to bad UDFs, either those written by yourself or those written by third parties. When a UDF throws an unhandled exception, it will crash the server. If you are using third-party UDF libraries, examine your declarations of UDFs for situations where CSTRING arguments are being passed and FREE_IT is not being invoked. I've seen third-party UDF libraries where the provided SQL declarations omit this... However, from your nickname, one supposes that you are writing applications in Delphi. Therein lies a major trap. The default settings of many data access component sets, notably those supplied by Borland (VCL/BDE, DBX and IBX) will kill servers. It's essential to understand what's going on at the server when you use these defaults, viz. transactions running in so-called "Autocommit" mode and SELECT statements running interminably in a read-write transaction...in short, memory resources will become exhausted and the database itself will become overloaded with garbage that can't be cleaned up. Depending on the volume of operations, the crash will take from hours to days to occur, but occur it will. Another trap for Delphi developers using Enterprise editions is that your default "install-everything" for Delphi includes installing whatever version of InterBase was shipped on the CD. This includes installing an incompatible client (gds32.dll) in the system path of your development server, hijacking port 3050 and starting up InterBase automatically at boot-up. Delphi developers have been known to propagate this problem by deploying the wrong client library into production, as well... Add to that the possibility that multiple local application instances may be accessing the database through the IPServer protocol ("Windows local protocol", which Borland's components graciously make the default) and are stamping all over the server's memory, and you have an ongoing support If you graduated from a desktop database like Paradox or Access, it's possible you think table components are an ideal way to access all database engines. NOT. Table components are for desktop database engines. Networked database engines require SQL and, invariably, serious attention to any indexes that the database carried as cargo from the So the Delphi side of things can be a can of worms for performance and stability. If you are only beginning to appreciate the issues surrounding the multi-generational architecture and a transactional DB engine, then the application code is quite likely to be in need of serious review.
OPCFW_CODE
using System.Linq; using Assets.Scripts.Interactive.Abstract; using UnityEngine; namespace Assets.Scripts.Interactive { public class Table : Usable { public bool HasBigPlate { get { return BigPlatePosition.GetComponentInChildren<BigPlate>() != null; } } public Seat[] Seats; public GameObject BigPlatePosition; public GameObject FinishedPlatePosition; public override void Interact() { TakeOrder(); } public override void Interact(GameObject obj) { var plate = obj.GetComponent<Plate>(); if (plate != null) Serve(plate); var bigPlate = obj.GetComponent<BigPlate>(); if (bigPlate != null) ServeBigPlate(bigPlate); } private void Serve(Plate plate) { if (HasBigPlate) return; var seatToServe = FindSeatWithoutPlate(); if (seatToServe == null) return; seatToServe.Serve(plate); } private void ServeBigPlate(BigPlate bigPlate) { if (HasBigPlate) return; var newPosition = new Vector3(); var catchableCollider = bigPlate.GetComponent<BoxCollider>(); if (catchableCollider != null) newPosition.y = catchableCollider.size.y / 2; bigPlate.Catch(BigPlatePosition, newPosition, new Quaternion()); bigPlate.CanBeCaught = false; foreach (var seat in Seats) { if (seat.HasClient) seat.Client.Eat(bigPlate); } } public void AddPlateToFinishPosition(Dish dish) { var plate = dish.GetComponent<Plate>(); if (plate == null) { dish.CanBeCaught = true; return; } var plateInFinisgPosition = FinishedPlatePosition.GetComponentInChildren<Plate>(); if (plateInFinisgPosition != null) { plateInFinisgPosition.Stack(plate); } else { plate.CanBeCaught = true; var newPosition = new Vector3(); var catchableCollider = plate.GetComponent<BoxCollider>(); if (catchableCollider != null) newPosition.y = catchableCollider.size.y / 2; plate.Catch(FinishedPlatePosition, newPosition, new Quaternion()); } } private void TakeOrder() { Debug.Log("Take an order"); } private Seat FindUnoccupiedSeat() { return Seats.FirstOrDefault(seat => !seat.HasClient); } private Seat FindSeatWithoutPlate() { return Seats.FirstOrDefault(seat => seat.HasClient && !seat.HasPlate); } } }
STACK_EDU
Main / Cards & Casino / Ets 2 multiplayer torent tpb Ets 2 multiplayer torent tpb Name: Ets 2 multiplayer torent tpb File size: 8mb In this case, we recommend that you download Euro Truck Simulator 2 torrent. And our gaming site, and this page directly, will help to do it quickly, reliably. 26 Aug Euro Truck Simulator 2 Free Download PC Game Cracked in Direct Link and Torrent. Euro Truck Simulator 2 is a vehicle simulation game. Euro Truck Simulator 2 PC Game Overview. 13 Oct Euro Truck Simulator 2 is that rare thing, a strong sim tethered to a strong game. Where other vehicle-obsessed devs seem to take player. Euro Truck Simulator 2 Download Free; Free Download Euro Truck Simulator 2 Torrent; Euro Truck Simulator 2 PC Download; ETS 2 Download Free; Free. Downloads for Euro Truck Simulator 2. Get behind the steering wheel of a Torrent For advanced users [~ mb], Local link. Local download link Slow [~ . Euro Truck Simulator 2 32bitbit free download torrent. Posted on March 8, by kyulee A major expansion of services in addition to Euro Truck Simulator 2. Make your way through the vast bulwarymiast Platform: PC. Engine: home. ETS2. s As you can guess, a multiplayer mod is a bit more advanced than replacing a few textures and sound files. Because we have more access to. 7 May euro truck simulator 2 free download torrent tpb - AWESOME FIX I Incoming search terms euro truck simulator 2 multiplayer download ets 2. download euro truck simulator 2 torent tpb full version. Euro Truck Simulator 2 Full Game Pc. I reposted your own 2 torrents upon. Euro Truck Simulator 2 Full. ETS2 Update New Scania 8x4. As already mentioned in the Euro Truck Simulator 2 Update feature list, and no doubt witnessed by most of the players. Against all expectations, Euro Truck Simulator 2 download is a pleasant surprise. Overcoming almost all of the shortcomings of mediocre first part ETS2 free. 24 Oct Euro Truck Simulator 2 Download Free PC version game setup single link. Download .. Ford Racing 2 PC Game A4 download torrent TPB. likes. you can download pc games easyliey in this website. February 27 ·. africanpremieradventures.com .. Download Euro Truck Simulator 2 [v s + 29 DLC] () [R.G. Mechanics] Torrent - Kickass. 25 Jun Click to download: Download euro truck simulator torrent tpb pirate Euro Truck Simulator 2 pentru PC, PC CD Key. http://thepiratebay. 29 Jan Piracy, through torrent sites or any other avenue, is never a good thing, but what the illegal sharing of The Elder Scrolls V: Skyrim; The Sims 4; Euro Truck Simulator 2; Farming Simulator 15; Dying Light The Pirate Bay.
OPCFW_CODE
This is a project funded by the National Science Foundation (NSF) under the Design of Engineering Material Systems (DEMS) program and is a collaborative project between Dr. Daniel Selva and Dr. Meredith Silberstein of Cornell University. Roshan Suresh Kumar is the graduate student from SEAK Lab working on this project. The main objective of the project is the development of a design method for 3D printable elastomeric metamaterials that leverages both expert knowledge and data (Figure 1). Metamaterials are materials that are constituted of repeated lattice unit cells. The geometry of these lattice cells directly affect the mechanical properties of the macro material. Thus, the design method will focus on the optimization of the lattice design to achieve desired mechanical properties such as high tensile strength, low volume fraction etc. Since the inverse design problem is extremely complex, involving combinatorial design decisions, non-convex, nonlinear mapping to the objectives and multiple hard-to-satisfy constraints, the employment of expert knowledge into the optimization framework will greatly improve the efficacy of the design search. Usually, methods like topology optimization using expensive evaluation models such as FEA simulations have been utilized for these design problems. The use of surrogate models is also severely limited due to the expense of training using high fidelity models and poor performance of such models in certain regions of the design space. Expert knowledge is available in many different forms (such as physics-based models and design heuristics) and represents prior tested knowledge that can prove useful for the design problem. Moreover, the incorporation of these heuristics into the design optimization problem has not been extensively studied. In lieu of this, the specific intellectual contributions of this work are as follows: - Guidelines about when and how it makes sense to use expert knowledge for materials design - Strategies to effectively incorporate different types of knowledge into the design method in combination with data-driven approaches - New materials designed through this framework - A knowledge base for design of mechanical metamaterials This research approach harnesses the expertise of the Selva group in engineering design, optimization, machine learning and knowledge-based systems and the expertise of the Silberstein group in polymer modelling and experimental characterization. The results from the utilization of design heuristics for a multiobjective mechanical metamaterial design optimization problem were published and presented at IDETC/CIE 2021 in the paper titled “Leveraging Design Heuristics for Multi-objective Metamaterial Design Optimization.” In the paper, the challenges in identifying and leveraging the promising design heuristics for a given design optimization problem were studied through a simple class of 2D multiobjective metamaterial design optimization problems. The design space was a 2D 3×3 node grid unit cell with 30 binary design decisions representing the presence or absence of truss members within the node grid. Unit cell repeatability in both orthogonal directions was enforced by emulating the design decisions for opposite edge members. An example design in its repeated configuration is shown in Figure 2. The objectives were maximization of vertical stiffness (C11) and minimization of unit cell volume fraction subject to three constraints. - The feasibility constraint dictates that designs cannot have any intersecting or overlapping members. This constraint aims to help create designs that are realizable. - The connectivity constraint enforces the presence of at least two connections for each used node and is related to mechanical stability. - The final constraint drives the design stiffness ratio C22/C11towards a user-defined target value. Four candidate design heuristics are considered for this problem that mimic the heuristics accumulated from a lessons learned database: - The partial collapsibility heuristic is associated with shear stability and checks for the presence of diagonal members in the four rectangular sub-regions within the unit cell. - The nodal properties heuristic helps in satisfaction of the connectivity constraint and encourages designs to have at least three connections for all used nodes and limits the number of unused nodes to 1. - The orientation heuristic is aimed at satisfaction of the stiffness ratio constraint and directs designs to achieve a target average orientation based on the user-defined target stiffness ratio. - The intersection heuristic helps satisfy the feasibility constraint and limits the number of intersections between design members. Design heuristics can be represented in different ways and the methods to enforce and leverage the heuristics in the optimization framework (called heuristic handling methods) vary based on the heuristic representations. A selection of these representations and their corresponding handling methods are shown in Figure 3. - The soft constraint representation is a function that maps the design space to the real number space quantifying the degree of satisfaction of the heuristic (e.g., 0 indicating no satisfaction whatsoever and 1 indicating full satisfaction). The heuristic soft constraint form can be incorporated into the optimization framework using either penalty methods like Interior Penalty or stochastic constraint enforcement methods like Disjunctive Normal Form or Adaptive Constraint Handling. - The repair operator form of a heuristic manipulates an input design to create another design that satisfies the heuristic to a greater extent. For example, the partial collapsibility repair operator adds a diagonal member at random to a design, which likely reduces the risk of collapsibility. The repair operator form of a heuristic can be leveraged either through a fixed operator selection method (which repeatedly enforces the repair operator throughout the optimization run) or through an Adaptive Operator Selection (AOS) method that maintains a pool of knowledge-dependent and knowledge-independent operators, assigns credits to operators based on their cumulative performance and probabilistically selects the operator to apply at different stages of the optimization process based on their credits. - The biased prior probability distribution form samples a population of designs biased towards the satisfaction of a heuristic. For example, the partial collapsibility biased distribution form is a population of designs biased to have more diagonal members. The biased distribution form of a heuristic can be used as the initial population for an optimization algorithm. To identify the promising heuristics for a particular design problem, certain metrics are introduced in the paper. These metrics quantify the ease of satisfaction of a heuristic and its targeted quantity and the correlation between the degree of satisfaction of a heuristic and being close to the true or penalized Pareto Front and the degree of satisfaction of a constraint. These metrics utilize the soft constraint forms of the heuristics and can be computed on either randomly generated designs or a combination of random designs and “good designs” obtained by running the optimization algorithm for a few iterations. These metrics include correlation coefficients (Pearson’s or Spearman’s) and interestingness measures from Association Rule Mining theory (e.g., support, confidence and lift). 10 trials of 400 designs each (100 randomly generated and 300 from an 𝜖-MOEA run on the penalized objectives without enforcing any heuristics) were used to validate our metrics to identify promising heuristics in the metamaterial design problem. It is found using this method that orientation and intersection are the promising heuristics whereas partial collapsibility and nodal properties are not promising. To show that the heuristics identified as promising can indeed help accelerate the optimization, a experimental study was done consisting of four cases: - Case 1: No heuristics enforced – control baseline. - Case 2: Orientation – Repair operator form handled using AOS and Intersection – Repair operator and biased distribution forms handled using AOS and biased initialization respectively - Case 3: Orientation – Repair operator form handled using AOS - Case 4: Partial Collapsibility and Nodal Properties – both repair operator forms handled using AOS. This is developed as a negative benchmark since these were identified as non-promising heuristics. For each case 30 𝜖-MOEA runs were performed with a population size of 100 and termination criteria of 3000 function evaluations. Figure 4 shows a quad chart with the feasible designs in the combined Pareto Front of all runs for all cases at different stages of the optimization process. It can be observed that Case 2, which leverages both the promising heuristics, is able to obtain feasible designs in the Pareto Front first. Figure 5 shows the plot of the hypervolume over the penalized objectives for all cases. Since Case 2 is the first to find feasible designs in the Pareto Front, it is also the first case to see a sharp rise in the hypervolume. Using the heuristics as in Case 2 saves 400 function evaluations compared to not using them. Figure 6 shows the fraction of runs for all cases to reach a threshold hypervolume of 0.75 at different stages of the optimization process. It is evident that not only does Case 2 reach the optimal design space the fastest but is also able to hold the advantage over the other cases throughout the optimization. In contrast, Case 4 which only leverages the non-promising heuristics performs worse than Case 1 where no heuristics are enforced. Based on these results, certain guidelines for designers to choose promising heuristics were identified and presented: - The most useful heuristics are those that are aligned with Pareto dominance in the penalized objective space, which takes into account both objectives and constraints. In other words, a promising heuristic must be aligned with either Pareto dominance in the true objective space or with any of the constraints. - Heuristics can be aligned with Pareto dominance only in certain regions of the design space. In this case, the heuristics must be selectively leveraged based on the current status of the optimization. - Heuristics are typically useful mostly in the beginning of the search since they are, by definition, only directives. Therefore, over-enforcement of heuristics in the later stages of the optimization process may lead to reduced diversity. Hence, design heuristics should be eventually shut off. A journal paper is in preparation that tests the efficacy of the metrics on two variations of an Earth Observation satellite design problem and two metamaterial design problems. Two different metrics are being developed to ascertain the promising heuristics in their soft constraint and operator forms respectively. A heuristics taxonomy is also being developed that categorizes heuristics based on their ease of satisfaction and alignment with the objectives and/or constraints. The output of the taxonomy would be a recommendation of heuristic representations and handling methods based on the perceived requirement of degree of enforcement of the heuristic. A manuscript titled “Examining the impact of asymmetry in lattice-based mechanical metamaterials” was published in the Mechanics of Materials journal in June 2022. In this work, a generative design process is used to generate datasets of lattices with varying degrees of symmetry which are compared in terms of the presence of certain desirable properties such as negative Poisson’s ratio. Some key design features are identified which show significant correlations with the desirable properties. Kumar, Roshan S., Srikar Srivasta, Meredith N. Silberstein, and Daniel Selva. “Leveraging Design Heuristics for Multi-Objective Metamaterial Design Optimization.” IDETC/CIE2021 (2021). Srivatsa, Srikar, Roshan Suresh Kumar, Daniel Selva, and Meredith N. Silberstein. “Examining the impact of asymmetry in lattice-based mechanical metamaterials.” Mechanics of Materials (2022): 104386.
OPCFW_CODE
I tend to make a lot of mistakes when running R. The two most common sources of error are typos and a mistake in the syntax of a command. Sometimes the command worked, but the variable has the wrong name or the data in it is bad. Here I have collected a series of operations to help correct errors and back up data. At the bottom of the page I also explain how to add modules to R that will increase its usefulness to us. I. Backing up your data. Backing up your data, step 1: the Workspace First of all, set your working-directory (Session > Set Working Directory > Choose…). Second, check what R understands to be its current working directory: Now you know where your saved data will go. In the Workspace Pane, click the little blue floppy-disk icon: This will open a new dialog box where you can choose the path (directory-location) and name of your file. Note my geeky protocol: I name my file with the convention “WrkSpace_YYMMDD” and by default the software adds “.RData” as the suffix. Next time you start R-Studio, do it by clicking on this file. Then R-Studio will load the whole workspace, including the dataframes, variables, and other objects you have imported and created. Backing up your data, step 2: export your dataframe as a CSV file. If you follow the exercises in these web-pages, you will add quite a bit of data to your main dataframe. You can back it up by using the write.csv() function to export the whole dataframe as a CSV file. This is also useful because you can open that CSV with a spreadsheet program, copy selected data from it, and paste it as a table into a word-processing document–such as your final report. # Save a CSV of the updated dataframe "CO": write.csv(CO, "CO_141123.csv") R-Studio will save this file to your current working directory. If you are not sure which is your current working directory, use getwd() to find out, and that way you can locate your exported CSV file. Backing up your data, step 3: save your History I normally call the upper-right-hand-pane the Workspace Pane in R-Studio. But it also has a History tab. If you click on that tab, you will see a log of all the commands you have entered into R. The Console Pane (lower left) is also aware of this history; if you have the cursor active in the Console Pane, and press the Up arrow on your keyboard, you will see all the previously-entered commands appear. (I am digressing a bit now, but notice that this history-keeping function in R is really useful if you want to re-run a command or re-run a slightly-edited version of the command.) It is nice to back up this complete record of all your entered commands: The History file will be saved to the current Working Directory, with an automatic “.RHistory” suffix. It is a text file. I suggest naming it with the YYMMDD prefix, so on this day it would be named: 141123.RHistory Backing up your data, step 4: Save your Script Sheet You can also save scripts of commands in the upper-left window. Unlike the History, which is a verbatim record of every entered command (including view-refresh commands), the Script is your own page where you can keep and annotate the commands that have worked for you. In this case, I have a sheet that I have been working on for several weeks, and R-Studio just overwrites the same named file. R-Studio automatically appends the “.r” suffix to this file. Backing up your data, final note: R-Studio’s own “.RData” backup When you quit R-Studio, it also asks if you want to save the workspace. I think this automatic save may be redundant, but it saves an .RData file in the directory where the software is installed (at least it does on my Linux system). I think that, at minimum, this will preserve your preferences. Not sure what else it saves, so I would not rely on it for saving all your work. With all your stuff backed up, now you can move on to… II. Cleaning up your Workspace Removing an object from your R Workspace Now that you have baked up your data, you can remove stuff that was made by mistake, like a mis-named variable. The basic syntax is: rm(object) if you name specific objects each time you run this command, you will not accidentally erase all the data in your workspace. Deleting a specific column of data in a dataframe Sometimes you make a mistake when adding a column of data (also called a variable) to a dataframe. How do you selectively delete data within a dataframe? Set the values to NULL. # Remove mistaken variable "HiDens" from dataframe CO: CO$HiDens = NULL Renaming a specific column within a dataframe Sometimes you like the data you created, but not the name you gave to the column. # Rename the variable "ThrLat" within dataframe "CO" to become "hiloLatTr": names(CO)[names(CO)=="ThrLat"] <-"hiloLatTr" The syntax of this command is pretty convoluted. The basic function is: names(DF)[col#] <- “newcolumnname” …and if you know the number of the column you are going to rename, you can just put the number in; but we are using a dataframe with more than 40 columns. Rather than try to find the column name, we can insert the following subcommand into the middle of the main command: This subcommand makes R aware that “oldcolumnname” is the set of data to be modified, within the dataframe “DF”. There is an easier command to rename columns within a dataframe: rename.vars(DF, from=”oldcolumname”, to=”newcolumnname”) …which has a much more intuitive syntax. However, it is not available in the basic default installation of R and R-Studio. It is in the add-on package called gdata. This is an appropriate moment to point out that you can… III. Soup-up R: add packages to add functionality Since R is an open-source project, many people create packages that add functions to R. In May of 2014, for example, Seong-Yun Hong published (i.e. uploaded) the R package “seg” which includes the five equations Massey & Denton (1988) described as the various dimensions of segregation. Make sure your system has an active internet connection, and then type the following command in the Console: Also, as mentioned above, there is an easier command for renaming variables, available in the gdata package. Install it (with an internet connection) by entering: In the install-feedback, I noticed that this also installed two more very useful commands: read.xls() and read.xlsx(), which means you could import straight from an Excel spreadsheet. I like to activate these and several more built-in packages in R-Studio. In the lower-right View Pane, switch to the Packages tab and click the checkboxes shown: Before you install gdata package, if you try to use the rename.vars() function, R will respond with a digital shoulder-shrug: > rename.vars(CO, from="ThrDens", to="hiloDnsTr") Error: could not find function "rename.vars" With the gdata package installed, it runs without complaint.
OPCFW_CODE
November 30 marked the International Day of information. Holiday appeared almost 30 years ago – in 1988, when it was first recorded mass epidemic of “worm” Morris. Mitapy security we hold regularly, and today we can afford to do without another announcement (just watch out for events in the blog). Stir all who in any way associated with information protection, and recall the methods of information security protection will top the main vulnerabilities 2015 The year began wondering. Not had time to cool off from the community news error HeartBleed, which, perhaps, was the largest in the history of vulnerability information, how to identify a comparable scale vulnerability, which received the code nameGHOST. Critical hole was found in the system library Glibc and manifested itself in the processing of a specially designed data functions gethostbyname () and gethostbyname2 (), which are used in many programs to convert the host name in the IP-address. The problem touched the 7 Debian, Red Hat Enterprise Linux 6 and 7, CentOS 6 and 7, Ubuntu 10.04 and 12.04, SUSE Linux Enterprise 10 and 11. Interestingly, there is a bug in the code since 2000 and was eliminated in May 2013 but no indication that the vulnerability can have serious consequences. As a result, a huge amount of distributions simply ignored updated to stable version of the package. An ancient evil has awakened Spring in the server and client implementations of TLS / SSL discovered a critical vulnerability, called FREAK. She touched on Android devices and the browser Safari. Dangers are subject, including sites that use the technology SSL. The most amazing thing is that this vulnerability for many years. Until 1999, the United States prohibits the export of devices with strong cryptographic protection. To get around this restriction, companies had to incorporate a little protection, effectively leaving the future open door to hackers SSL. A matter of time was to reveal a way to spend man-in-the-middle-attack and force the client to use TLS-sensitive codes from the server. Breaking these codes takes only a few hours, because they are based on the encryption key of 512 bits. Regular column “The vulnerability in Flash» July 14, Adobe has released updates to Flash Player, which closes a critical vulnerability that allowed you to take remote control of the system in Windows, Linux and OS X, covertly setting cipher files CryptoWall 3.0. Due to the found vulnerability could execute code in almost all of the existing browsers. The whole year has passed under the slogan “Let’s bury have Flash». Chief Information Security Facebook Alex Stamos called Adobe permanently close Flash. A company Recorded Future has conducted a study which addressed vulnerabilities in popular exploit kits. Of the ten major exploits vulnerabilities found in eight focus on plug-in Flash. Remote car burglary In July, at the Defcon conference in 2015 told the six vulnerabilities found in the Tesla Model S, with which it was possible to hack a car. However, this will still require access to the machine. Tesla quickly released a fresh update. In the same month, the IB-Schnick in collaboration with Wired magazine do break-Cherokee Jeep. Due to the vulnerability of the system vehicle Uconnnect «white hackers” remote access to multimedia system, wipers and air conditioning. Following fell Protection Steering, ultimately, to disable the brakes. At the same time managed to hack the whole system remotely. Bug of hiding in the dongle inserted into the diagnostic port on-board computer. These devices measure the efficiency of fuel consumption and distance traveled. In February of this year, vulnerability was found in the infotainment system ConnectedDrive car the BMW. The researchers conducted the attack by creating a fake base station. Using the substitution of network traffic, rolled down the window and managed to open the door, but do not start the engine. 95% of vulnerable users It is no secret that there is a direct link between the popular and the number of technologies implemented at her attacks. In July (summer hackers hot season) suddenly it turned out that nearly a billion Android-devices vulnerable to remote access to them through the MMS. Built on the Android library for handling media files of various formats contain bugs that allow to infect 95% Android-devices. Fortunately, Google quickly released an update axis. Unfortunately, the old devices by this update do not fall. IOS Hacking 9 In November, it is not known who is not known how it was possible to hack iOS 9. This was stated by the company Zerodium, search and sale of vulnerabilities. The company held a contest, which required participants to find and exploit a flaw in Safari or Chrome. As a result, a group of hackers called not received $ 1 million for an exploit that allows you to install arbitrary software on devices running iOS 9. Encryption in the trend Software encrypts files, it’s not easy to users to Linux, as the site administrator on the machine which deployed its own Web server. Trojan Linux.Encoder.1 downloads files to the requirements to pay a ransom of Bitcoins and a file containing the path to the public RSA-key, then launched himself like a demon and removes the original files. This RSA-key is then used to store AES-keys with which the Trojan encrypts files on the victim machine. The Trojan first encrypts files in users’ home directories and catalogs relating to the administration of websites. Only after that bypasses Linux.Encoder.1 rest of the system. Encrypted files get a new extension .encrypted. On November 12, 2015, there were about 2 thousand. Website allegedly attacked cryptographer Linux.Encoder.1. However, this was not the only Trojan. Linux.Encoder.2 uses a different pseudo-random numbers for encryption uses a library OpenSSL (but not PolarSSL, in Linux.Encoder.1), encryption is implemented in the AES-OFB-128. Instead of an epilogue There is still a whole month, so it’s easy to imagine how the top-7 turns in the top 10. But while hackers are looking for zero-day vulnerabilities, the main danger is closer than you can imagine. This year, at an international forum on practical securityPositive Hack Days have sounded corny, but the eternal truth: its own employees of the companies are a major source of vulnerability. According to the analysis of 18 large state and commercial companies, some of which are included in the Fortune Global, it was revealed a significant reduction in the level of awareness of employees in safety issues. So take care of their counterparts in the first place. Do a good deed – remind them how important it is to monitor safety.
OPCFW_CODE
The integrated ITS is part of the SAP Web Application Server 6.40. It is automatically installed together with the SAP kernel. To be able to use a service via the integrated ITS, you must first follow the standard procedures for activation and configuration of the Internet Communication Manager (ICM). For more information, see Administration of the Internet Communication Manager. In addition, you must activate the service you want to execute and the service default_host/sap/public/bc/its/mimes in the Internet Communication Framework (ICF). For further information refer to . Make sure that besides the Internet service to be used also the two Internet services system and webgui (also known as SAP GUI for HTML) have been published to site INTERNAL, because objects of these services may also be used by other services. The ICF path for the webgui service is /default_host/sap/bc/gui/sap/its/. You can use this path to search for the service in transaction SICF. HTTP requests with path are forwarded to the request handler of the integrated ITS. The complete URL for access to the SAP system with SAP GUI for When using the integrated ITS, two profile parameters (see also Changing and Switching Profile Parameters) are of special importance: You use this parameter to deactivate (0) or activate (1) the integrated ITS. Even if the integrated ITS is activated, it only accesses system resources when it is actually used. Nevertheless, it can make sense to deactivate it to prevent users from accessing the SAP system with SAP GUI for HTML via special application servers (such as batch or update instances). Since the conversion of SAP screens into HTML pages uses additional CPU time, it makes sense to reserve a number of dedicated application servers to be used with SAP GUI for HTML and to use a special logon group to balance the load between them. The “global area“ is a memory shared by all work processes of the SAP kernel. The integrated ITS uses it for the runtime version of the HTML Business templates ("preparsed templates"). The memory required depends on the number and size of the templates used to display the services called by the users. The default value is large enough to use SAP GUI for HTML with one browser version in one logon language. If your users access the ITS with different languages or browsers (for example, Microsoft Internet Explorer and Netscape Navigator), or if they need additional services apart from SAP GUI for HTML, then the number of used templates will increase and you will have to adapt em/global_area_MB accordingly. You find information on when and how you should adapt this parameter in note 742048. In case the SAP GUI logon for HTML/IAC fails, see SAP Note 698329.
OPCFW_CODE